Mirantis Container Cloud Documentation

The documentation is intended to help operators understand the core concepts of the product.

The information provided in this documentation set is constantly improved and amended based on feedback and requests from our software consumers. This documentation set describes the features supported within the three latest Container Cloud minor releases and their supported Cluster releases, marked with a corresponding note Available since <release-version>.

The following table lists the guides included in the documentation set you are reading:

Guides list

Guide

Purpose

Reference Architecture

Learn the fundamentals of Container Cloud reference architecture to plan your deployment.

Deployment Guide

Deploy Container Cloud of a preferred configuration using supported deployment profiles tailored to the demands of specific business cases.

Operations Guide

Deploy and operate the Container Cloud managed clusters.

Release Compatibility Matrix

Deployment compatibility of the Container Cloud component versions for each product release.

Release Notes

Learn about new features and bug fixes in the current Container Cloud version as well as in the Container Cloud minor releases.

Intended audience

This documentation assumes that the reader is familiar with network and cloud concepts and is intended for the following users:

  • Infrastructure Operator

    • Is a member of the IT operations team

    • Has working knowledge of Linux, virtualization, Kubernetes API and CLI, and OpenStack to support the application development team

    • Accesses Mirantis Container Cloud and Kubernetes through a local machine or web UI

    • Provides verified artifacts through a central repository to the Tenant DevOps engineers

  • Tenant DevOps engineer

    • Is a member of the application development team and reports to the line of business (LOB)

    • Has working knowledge of Linux, virtualization, Kubernetes API and CLI to support application owners

    • Accesses Container Cloud and Kubernetes through a local machine or web UI

    • Consumes artifacts from a central repository approved by the Infrastructure Operator

Conventions

This documentation set uses the following conventions in the HTML format:

Documentation conventions

Convention

Description

boldface font

Inline CLI tools and commands, titles of the procedures and system response examples, table titles.

monospaced font

File names and paths, Helm chart parameters and their values, package names, node names and labels, and so on.

italic font

Information that distinguishes some concept or term.

Links

External links and cross-references, footnotes.

Main menu > menu item

GUI elements that include any part of the interactive user interface and menu navigation.

Superscript

Some extra, brief information. For example, if a feature is available from a specific release or if a feature is in the Technology Preview development stage.

Note

The Note block

Messages of a generic meaning that may be useful to the user.

Caution

The Caution block

Information that helps a user avoid mistakes and undesirable consequences when following the procedures.

Warning

The Warning block

Messages with details that can be easily missed but should not be ignored, as they are valuable to the user before proceeding.

See also

The See also block

List of references that may be helpful for understanding related tools, concepts, and so on.

Learn more

The Learn more block

Used in the Release Notes to wrap a list of internal references to the reference architecture, deployment and operation procedures specific to a newly implemented product feature.

Technology Preview features

A Technology Preview feature provides early access to upcoming product innovations, allowing customers to experiment with the functionality and provide feedback.

Technology Preview features may be privately or publicly available but are not intended for production use. While Mirantis will provide assistance with such features through official channels, normal Service Level Agreements do not apply.

As Mirantis considers making future iterations of Technology Preview features generally available, we will do our best to resolve any issues that customers experience when using these features.

During the development of a Technology Preview feature, additional components may become available to the public for evaluation. Mirantis cannot guarantee the stability of such features. As a result, if you are using Technology Preview features, you may not be able to seamlessly update to subsequent product releases or to upgrade or migrate to functionality that has not yet been announced as fully supported.

Mirantis makes no guarantees that Technology Preview features will graduate to generally available features.

Documentation history

The documentation set refers to Mirantis Container Cloud GA as the latest released GA version of the product. For details about the Container Cloud GA minor release dates, refer to Container Cloud releases.

Product Overview

Mirantis Container Cloud enables you to ship code faster by combining speed with choice, simplicity, and security. Through a single pane of glass, you can deploy, manage, and observe Kubernetes clusters on bare metal infrastructure.

The list of the most common use cases includes:

Kubernetes cluster lifecycle management

The consistent lifecycle management of a single Kubernetes cluster is a complex task on its own that becomes significantly more difficult when you have to manage multiple clusters across different platforms spread across the globe. Mirantis Container Cloud provides a single, centralized point from which you can perform full lifecycle management of your container clusters, including automated updates and upgrades.

Highly regulated industries

Regulated industries need fine-grained access control, high security standards, and extensive reporting capabilities to ensure that they can meet and exceed security requirements. Mirantis Container Cloud provides a fine-grained Role-Based Access Control (RBAC) mechanism and easy integration and federation with existing identity management (IDM) systems.

Logging, monitoring, alerting

Complete operational visibility is required to identify and address issues in the shortest amount of time, before a problem becomes serious. Mirantis StackLight is the proactive monitoring, logging, and alerting solution designed for large-scale container and cloud observability with extensive collectors, dashboards, trend reporting, and alerts.

Storage

Cloud environments require a unified pool of storage that can be scaled up by simply adding storage server nodes. Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability. Container Cloud deploys Ceph using Rook to provide and manage robust persistent storage for Kubernetes workloads on baremetal-based clusters.

Security

Security is a core concern for all enterprises, especially as exposing more systems to the Internet becomes the norm. Mirantis Container Cloud provides a multi-layered security approach that includes effective identity management and role-based authentication, secure out-of-the-box defaults, and extensive security scanning and monitoring during the development process.

5G and Edge

The introduction of 5G technologies and the support of Edge workloads require an effective multi-tenant solution to manage the underlying container infrastructure. Mirantis Container Cloud provides a full-stack, secure, multi-cloud cluster management and Day-2 operations solution.

Reference Architecture

Overview

Mirantis Container Cloud is a set of microservices that are deployed using Helm charts and run in a Kubernetes cluster. Container Cloud is based on the Kubernetes Cluster API community initiative.

The following diagram illustrates an overview of Container Cloud and the clusters it manages:

_images/cluster-overview.png

All artifacts used by Kubernetes and workloads are stored on the Container Cloud content delivery network (CDN):

  • mirror.mirantis.com (Debian packages including the Ubuntu mirrors)

  • binary.mirantis.com (Helm charts and binary artifacts)

  • mirantis.azurecr.io (Docker image registry)

All Container Cloud components are deployed in the Kubernetes clusters. All Container Cloud APIs are implemented using Kubernetes Custom Resource Definitions (CRDs) that represent custom objects stored in Kubernetes and allow you to extend the Kubernetes API.

The Container Cloud logic is implemented using controllers. A controller handles the changes in the custom resources defined in the controller CRD. A custom resource contains a spec that describes the desired state of the resource as provided by the user. On every change, the controller reconciles the external state of the custom resource with the user parameters and stores this external state in the status subresource of the custom resource.
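For illustration, the spec and status pattern looks as follows when viewed as YAML. This is a schematic, hypothetical resource: the API group, kind, and field names below are placeholders rather than an exact Container Cloud CRD schema.

 apiVersion: example.mirantis.com/v1alpha1   # placeholder API group and version
 kind: ExampleResource                       # each controller defines its own CRD kinds
 metadata:
   name: demo
   namespace: default
 spec:                 # desired state, provided by the user
   replicas: 3
 status:               # external state, reconciled and stored by the controller
   readyReplicas: 3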

Container Cloud cluster types

The types of the Container Cloud clusters include:

Bootstrap cluster
  • Runs the bootstrap process on a seed data center bare metal node that can be reused after the management cluster deployment for other purposes.

  • Requires access to the bare metal provider backend.

  • Initially, the bootstrap cluster is created with the following minimal set of components: Bootstrap Controller, public API charts, and the Bootstrap API.

  • The user can interact with the bootstrap cluster through the Bootstrap API to create the configuration for a management cluster and start its deployment. More specifically, the user performs the following operations:

    1. Create required deployment objects.

    2. Optionally add proxy and SSH keys.

    3. Configure the cluster and machines.

    4. Deploy a management cluster.

  • The user can monitor the deployment progress of the cluster and machines.

  • After a successful deployment, the user can download the kubeconfig artifact of the provisioned cluster.

Management cluster

Comprises Container Cloud as a product and provides the following functionality:

  • Runs all public APIs and services, including the web UIs of Container Cloud.

  • Runs the provider-specific services and internal API, including LCMMachine and LCMCluster. Also runs the LCM controller for orchestrating managed clusters and other controllers for handling different resources.

  • Requires two-way access to a provider backend. The provider connects to a backend to spawn managed cluster nodes, and the agent running on the nodes accesses the management cluster to obtain the deployment information.

For deployment details of a management cluster, see Deployment Guide.

Managed cluster
  • A Mirantis Kubernetes Engine (MKE) cluster that an end user creates using the Container Cloud web UI.

  • Requires access to its management cluster. Each node of a managed cluster runs an LCM Agent that connects to the LCM API of the management cluster to obtain the deployment details.

  • Supports Mirantis OpenStack for Kubernetes (MOSK). For details, see MOSK documentation.

All types of the Container Cloud clusters except the bootstrap cluster are based on the MKE and Mirantis Container Runtime (MCR) architecture. For details, see MKE and MCR documentation.

The following diagram illustrates the distribution of services between each type of the Container Cloud clusters:

_images/cluster-types.png

Container Cloud provider

The Mirantis Container Cloud provider is the central component of Container Cloud that provisions a node of a management or managed cluster and runs the LCM Agent on this node. It runs in a management cluster and requires connection to a provider backend.

The Container Cloud provider interacts with the following types of public API objects:

Public API object name

Description

Container Cloud release object

Contains the following information about clusters:

  • Version of the supported Cluster release for a management cluster

  • List of supported Cluster releases for the managed clusters and supported upgrade path

  • Description of Helm charts that are installed on the management cluster

Cluster release object

  • Provides a specific version of a management or managed cluster. A Cluster release object, as well as a Container Cloud release object, never changes; only new releases can be added. Any change leads to a new release of a cluster.

  • Contains references to all components and their versions that are used to deploy all cluster types:

    • LCM components:

      • LCM Agent

      • Ansible playbooks

      • Scripts

      • Description of steps to execute during a cluster deployment and upgrade

      • Helm Controller image references

    • Supported Helm charts description:

      • Helm chart name and version

      • Helm release name

      • Helm values

Cluster object

  • References the Credentials, KaaSRelease and ClusterRelease objects.

  • Represents all cluster-level resources, for example, networks, load balancer for the Kubernetes API, and so on. It uses data from the Credentials object to create these resources and data from the KaaSRelease and ClusterRelease objects to ensure that all lower-level cluster objects are created.

Machine object

  • References the Cluster object.

  • Represents one node of a managed cluster and contains all data to provision it.

Credentials object

Contains all information necessary to connect to a provider backend.

PublicKey object

Is provided to every machine to obtain SSH access.
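The following schematic fragment illustrates how these objects reference each other. The API versions and field names are placeholders for illustration only and do not reproduce the exact Container Cloud API schema.

 # Schematic only: illustrates object relationships, not the exact API schema.
 apiVersion: example.mirantis.com/v1alpha1   # placeholder
 kind: Cluster
 metadata:
   name: managed-demo
   namespace: demo-project
 spec:
   credentialsRef:                  # placeholder reference to the Credentials object
     name: demo-credentials
   releaseRef:                      # placeholder reference to the ClusterRelease object
     name: cluster-release-x-y-z
 ---
 apiVersion: example.mirantis.com/v1alpha1   # placeholder
 kind: Machine
 metadata:
   name: managed-demo-worker-0
   namespace: demo-project
   labels:
     cluster-name: managed-demo     # placeholder label tying the Machine to its Cluster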

The following diagram illustrates the Container Cloud provider data flow:

_images/provider-dataflow.png

The Container Cloud provider performs the following operations in Container Cloud:

  • Consumes the following types of data from a management cluster:

    • Credentials to connect to a provider backend

    • Deployment instructions from the KaaSRelease and ClusterRelease objects

    • The cluster-level parameters from the Cluster objects

    • The machine-level parameters from the Machine objects

  • Prepares data for all Container Cloud components:

    • Creates the LCMCluster and LCMMachine custom resources for LCM Controller and LCM Agent. The LCMMachine custom resources are created empty to be later handled by the LCM Controller.

    • Creates the HelmBundle custom resources for the Helm Controller using data from the KaaSRelease and ClusterRelease objects.

    • Creates service accounts for these custom resources.

    • Creates a scope in Identity and access management (IAM) for user access to a managed cluster.

  • Provisions nodes for a managed cluster using the cloud-init script that downloads and runs the LCM Agent.

  • Installs Helm Controller as a Helm v3 chart.

Release Controller

The Mirantis Container Cloud Release Controller is responsible for the following functionality:

  • Monitor and control the KaaSRelease and ClusterRelease objects present in a management cluster. If any release object is used in a cluster, the Release Controller prevents the deletion of such an object.

  • Sync the KaaSRelease and ClusterRelease objects published at https://binary.mirantis.com/releases/ with an existing management cluster.

  • Trigger the Container Cloud auto-update procedure if a new KaaSRelease object is found:

    1. Search for the managed clusters with old Cluster releases that are not supported by a new Container Cloud release. If any are detected, abort the auto-update and display a corresponding note about an old Cluster release in the Container Cloud web UI for the managed clusters. In this case, a user must update all managed clusters using the Container Cloud web UI. Once all managed clusters are updated to the Cluster releases supported by a new Container Cloud release, the Container Cloud auto-update is retriggered by the Release Controller.

    2. Trigger the Container Cloud release update of all Container Cloud components in a management cluster. The update itself is processed by the Container Cloud provider.

    3. Trigger the Cluster release update of a management cluster to the Cluster release version that is indicated in the updated Container Cloud release version. The LCMCluster components, such as MKE, are updated before the HelmBundle components, such as StackLight or Ceph.

      Once a management cluster is updated, an option to update a managed cluster becomes available in the Container Cloud web UI. During a managed cluster update, all cluster components including Kubernetes are automatically updated to newer versions if available. The LCMCluster components, such as MKE, are updated before the HelmBundle components, such as StackLight or Ceph.

The Operator can delay the Container Cloud automatic upgrade procedure for a limited amount of time or schedule the upgrade to run at specific hours or on specific weekdays. For details, see Schedule Mirantis Container Cloud updates.

Container Cloud remains operational during the management cluster upgrade. Managed clusters are not affected during this upgrade. For the list of components that are updated during the Container Cloud upgrade, see the Components versions section of the corresponding Container Cloud release in Release Notes.

When Mirantis announces support of the newest versions of Mirantis Container Runtime (MCR) and Mirantis Kubernetes Engine (MKE), Container Cloud automatically upgrades these components as well. For the maintenance window best practices before upgrade of these components, see MKE Documentation.

See also

Patch releases

Web UI

The Mirantis Container Cloud web UI is mainly designed to create and update the managed clusters as well as add or remove machines to or from an existing managed cluster.

You can use the Container Cloud web UI to obtain the management cluster details including endpoints, release version, and so on. The management cluster update occurs automatically with a new release change log available through the Container Cloud web UI.

The Container Cloud web UI is a JavaScript application that is based on the React framework. The Container Cloud web UI is designed to work on the client side only. Therefore, it does not require a dedicated backend. It interacts with the Kubernetes and Keycloak APIs directly. The Container Cloud web UI uses a Keycloak token to interact with the Container Cloud API and download kubeconfig for the management and managed clusters.

The Container Cloud web UI uses NGINX that runs on a management cluster and handles the Container Cloud web UI static files. NGINX proxies the Kubernetes and Keycloak APIs for the Container Cloud web UI.

Bare metal

The bare metal service provides for the discovery, deployment, and management of bare metal hosts.

The bare metal management in Mirantis Container Cloud is implemented as a set of modular microservices. Each microservice implements a certain requirement or function within the bare metal management system.

Bare metal components

The bare metal management solution for Mirantis Container Cloud includes the following components:

Bare metal components

Component

Description

OpenStack Ironic

The backend bare metal manager in a standalone mode with its auxiliary services that include httpd, dnsmasq, and mariadb.

OpenStack Ironic Inspector

Introspects and discovers the bare metal hosts inventory. Includes OpenStack Ironic Python Agent (IPA) that is used as a provision-time agent for managing bare metal hosts.

Ironic Operator

Monitors changes in the external IP addresses of httpd, ironic, and ironic-inspector and automatically reconciles the configuration for dnsmasq, ironic, baremetal-provider, and baremetal-operator.

Bare Metal Operator

Manages bare metal hosts through the Ironic API. The Container Cloud bare-metal operator implementation is based on the Metal³ project.

Bare metal resources manager

Ensures that the bare metal provisioning artifacts, such as the operating system distribution image, are available and up to date.

cluster-api-provider-baremetal

The plugin for the Kubernetes Cluster API integrated with Container Cloud. Container Cloud uses the Metal³ implementation of cluster-api-provider-baremetal for the Cluster API.

HAProxy

Load balancer for external access to the Kubernetes API endpoint.

LCM Agent

Used for physical and logical storage, physical and logical networking, and control over the life cycle of bare metal machine resources.

Ceph

Distributed shared storage is required by the Container Cloud services to create persistent volumes to store their data.

MetalLB

Load balancer for Kubernetes services on bare metal. 1

Keepalived

Monitoring service that ensures availability of the virtual IP for the external load balancer endpoint (HAProxy). 1

IPAM

IP address management services provide consistent IP address space to the machines in bare metal clusters. See details in IP Address Management.

1  For details, see Built-in load balancing.

The diagram below summarizes the following components and resource kinds:

  • Metal³-based bare metal management in Container Cloud (white)

  • Internal APIs (yellow)

  • External dependency components (blue)

_images/bm-component-stack.png
Bare metal networking

This section provides an overview of the networking configuration and the IP address management in the Mirantis Container Cloud on bare metal.

IP Address Management

Mirantis Container Cloud on bare metal uses IP Address Management (IPAM) to keep track of the network addresses allocated to bare metal hosts. This is necessary to avoid IP address conflicts and expiration of address leases to machines through DHCP.

Note

Only IPv4 address family is currently supported by Container Cloud and IPAM. IPv6 is not supported and not used in Container Cloud.

IPAM is provided by the kaas-ipam controller. Its functions include:

  • Allocation of IP address ranges or subnets to newly created clusters using the Subnet resource.

    Note

    Before Container Cloud 2.27.0 (Cluster releases 17.1.0, 16.1.0, or earlier) the deprecated SubnetPool resource was also used for this purpose. For details, see MOSK Deprecation Notes: SubnetPool resource management.

  • Allocation of IP addresses to machines and cluster services at the request of baremetal-provider using the IpamHost and IPaddr resources.

  • Creation and maintenance of host networking configuration on the bare metal hosts using the IpamHost resources.

The IPAM service can support different networking topologies and network hardware configurations on the bare metal hosts.

In the most basic network configuration, IPAM uses a single L3 network to assign addresses to all bare metal hosts, as defined in Managed cluster networking.
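For example, a cluster-wide address range can be defined with a Subnet resource similar to the following sketch. The ipam.mirantis.com/v1alpha1 API group and the cidr, gateway, includeRanges, and nameservers fields follow typical Subnet examples, but treat the exact labels and fields as assumptions that may vary between product releases; all addresses are placeholders.

 apiVersion: ipam.mirantis.com/v1alpha1
 kind: Subnet
 metadata:
   name: demo-lcm-subnet
   namespace: demo-project
   labels:
     kaas.mirantis.com/provider: baremetal   # assumed provider label
 spec:
   cidr: 10.0.11.0/24
   gateway: 10.0.11.1
   includeRanges:
     - 10.0.11.100-10.0.11.200
   nameservers:
     - 172.18.176.6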

You can apply complex networking configurations to a bare metal host using the L2 templates. The L2 templates imply multihomed host networking and enable you to create a managed cluster where nodes use separate host networks for different types of traffic. Multihoming is required to ensure the security and performance of a managed cluster.
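The following abbreviated L2Template sketch shows the general idea: the l3Layout section references a Subnet (the demo-lcm-subnet from the previous example) and the npTemplate section renders a netplan-style host configuration. The template functions and field names follow common L2Template examples but should be treated as assumptions and verified against the documentation for your release.

 apiVersion: ipam.mirantis.com/v1alpha1
 kind: L2Template
 metadata:
   name: demo-l2template
   namespace: demo-project
 spec:
   l3Layout:
     - subnetName: demo-lcm-subnet      # Subnet defined in the previous example
       scope: namespace
   npTemplate: |
     version: 2
     ethernets:
       {{nic 0}}:
         dhcp4: false
         dhcp6: false
     bridges:
       k8s-lcm:
         interfaces:
           - {{nic 0}}
         addresses:
           - {{ip "k8s-lcm:demo-lcm-subnet"}}
         gateway4: {{gateway_from_subnet "demo-lcm-subnet"}}
         nameservers:
           addresses: {{nameservers_from_subnet "demo-lcm-subnet"}}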

Caution

Modification of L2 templates in use is allowed with a mandatory validation step from the Infrastructure Operator to prevent accidental cluster failures due to unsafe changes. The list of risks posed by modifying L2 templates includes:

  • Services running on hosts cannot reconfigure automatically to switch to the new IP addresses and/or interfaces.

  • Connections between services are interrupted unexpectedly, which can cause data loss.

  • Incorrect configurations on hosts can lead to irrevocable loss of connectivity between services and unexpected cluster partition or disassembly.

For details, see Modify network configuration on an existing machine.

Management cluster networking

The main purpose of networking in a Container Cloud management cluster is to provide access to the Container Cloud Management API that consists of:

  • Container Cloud Public API

    Used by end users to provision and configure managed clusters and machines. Includes the Container Cloud web UI.

  • Container Cloud LCM API

    Used by LCM agents in managed clusters to obtain configuration and report status. Contains provider-specific services and internal API including LCMMachine and LCMCluster objects.

The following types of networks are supported for the management clusters in Container Cloud:

  • PXE network

    Enables PXE boot of all bare metal machines in the Container Cloud region.

    • PXE subnet

      Provides IP addresses for DHCP and network boot of the bare metal hosts for initial inspection and operating system provisioning. This network may not have the default gateway or a router connected to it. The PXE subnet is defined by the Container Cloud Operator during bootstrap.

      Provides IP addresses for the bare metal management services of Container Cloud, such as bare metal provisioning service (Ironic). These addresses are allocated and served by MetalLB.

  • Management network

    Connects LCM Agents running on the hosts to the Container Cloud LCM API. Serves the external connections to the Container Cloud Management API. The network is also used for communication between kubelet and the Kubernetes API server inside a Kubernetes cluster. The MKE components use this network for communication inside a swarm cluster.

    • LCM subnet

      Provides IP addresses for the Kubernetes nodes in the management cluster. This network also provides a Virtual IP (VIP) address for the load balancer that enables external access to the Kubernetes API of a management cluster. This VIP is also the endpoint to access the Container Cloud Management API in the management cluster.

      Provides IP addresses for the externally accessible services of Container Cloud, such as Keycloak, web UI, StackLight. These addresses are allocated and served by MetalLB.

  • Kubernetes workloads network

    Technology Preview

    Serves the internal traffic between workloads on the management cluster.

    • Kubernetes workloads subnet

      Provides IP addresses that are assigned to nodes and used by Calico.

  • Out-of-Band (OOB) network

    Connects to Baseboard Management Controllers of the servers that host the management cluster. The OOB subnet must be accessible from the management network through IP routing. The OOB network is not managed by Container Cloud and is not represented in the IPAM API.

Managed cluster networking

Kubernetes cluster networking is typically focused on connecting pods on different nodes. On bare metal, however, the cluster networking is more complex as it needs to facilitate many different types of traffic.

Kubernetes clusters managed by Mirantis Container Cloud have the following types of traffic:

  • PXE network

    Enables the PXE boot of all bare metal machines in Container Cloud. This network is not configured on the hosts in a managed cluster. It is used by the bare metal provider to provision additional hosts in managed clusters and is disabled on the hosts after provisioning is done.

  • Life-cycle management (LCM) network

    Connects LCM Agents running on the hosts to the Container Cloud LCM API. The LCM API is provided by the management cluster. The LCM network is also used for communication between kubelet and the Kubernetes API server inside a Kubernetes cluster. The MKE components use this network for communication inside a swarm cluster.

    When using the BGP announcement of the IP address for the cluster API load balancer, which is available as Technology Preview since Container Cloud 2.24.4, no segment stretching is required between Kubernetes master nodes. Also, in this scenario, the load balancer IP address is not required to match the LCM subnet CIDR address.

    • LCM subnet(s)

      Provides IP addresses that are statically allocated by the IPAM service to bare metal hosts. This network must be connected to the Kubernetes API endpoint of the management cluster through an IP router.

      LCM Agents running on managed clusters will connect to the management cluster API through this router. LCM subnets may be different per managed cluster as long as this connection requirement is satisfied.

      The Virtual IP (VIP) address for load balancer that enables access to the Kubernetes API of the managed cluster must be allocated from the LCM subnet.

    • Cluster API subnet

      Technology Preview

      Provides a load balancer IP address for external access to the cluster API. Mirantis recommends that this subnet stays unique per managed cluster.

  • Kubernetes workloads network

    Serves as an underlay network for traffic between pods in the managed cluster. Do not share this network between clusters.

    • Kubernetes workloads subnet(s)

      Provides IP addresses that are statically allocated by the IPAM service to all nodes and that are used by Calico for cross-node communication inside a cluster. By default, VXLAN overlay is used for Calico cross-node communication.

  • Kubernetes external network

    Serves ingress traffic to the managed cluster from the outside world. You can share this network between clusters, but with dedicated subnets per cluster. Several or all cluster nodes must be connected to this network. Traffic from external users to the externally available Kubernetes load-balanced services comes through the nodes that are connected to this network.

    • Services subnet(s)

      Provides IP addresses for externally available Kubernetes load-balanced services. The address ranges for MetalLB are assigned from this subnet. There can be several subnets per managed cluster that define the address ranges or address pools for MetalLB.

    • External subnet(s)

      Provides IP addresses that are statically allocated by the IPAM service to nodes. The IP gateway in this network is used as the default route on all nodes that are connected to this network. This network allows external users to connect to the cluster services exposed as Kubernetes load-balanced services. MetalLB speakers must run on these same nodes. For details, see Configure node selector for MetalLB speaker.

  • Storage network

    Serves storage access and replication traffic from and to Ceph OSD services. The storage network does not need to be connected to any IP routers and does not require external access, unless you want to use Ceph from outside of a Kubernetes cluster. To use a dedicated storage network, define and configure both subnets listed below.

    • Storage access subnet(s)

      Provides IP addresses that are statically allocated by the IPAM service to Ceph nodes. The Ceph OSD services bind to these addresses on their respective nodes. Serves Ceph access traffic from and to storage clients. This is a public network in Ceph terms. 1

    • Storage replication subnet(s)

      Provides IP addresses that are statically allocated by the IPAM service to Ceph nodes. The Ceph OSD services bind to these addresses on their respective nodes. Serves Ceph internal replication traffic. This is a cluster network in Ceph terms. 1

  • Out-of-Band (OOB) network

    Connects baseboard management controllers (BMCs) of the bare metal hosts. This network must not be accessible from the managed clusters.

The following diagram illustrates the networking schema of the Container Cloud deployment on bare metal with a managed cluster:

_images/bm-cluster-l3-networking-multihomed.png
1  For more details about Ceph networks, see Ceph Network Configuration Reference.

Host networking

The following network roles are defined for all Mirantis Container Cloud cluster nodes on bare metal, including the bootstrap, management, and managed cluster nodes:

  • Out-of-band (OOB) network

    Connects the Baseboard Management Controllers (BMCs) of the hosts in the network to Ironic. This network is out of band for the host operating system.

  • PXE network

    Enables remote booting of servers through the PXE protocol. In management clusters, the DHCP server listens on this network for host discovery and inspection. In managed clusters, hosts use this network for the initial PXE boot and provisioning.

  • LCM network

    Connects LCM Agents running on the node to the LCM API of the management cluster. It is also used for communication between kubelet and the Kubernetes API server inside a Kubernetes cluster. The MKE components use this network for communication inside a swarm cluster. In management clusters, it is replaced by the management network.

  • Kubernetes workloads (pods) network

    Technology Preview

    Serves connections between Kubernetes pods. Each host has an address on this network, and this address is used by Calico as an endpoint to the underlay network.

  • Kubernetes external network

    Technology Preview

    Serves external connection to the Kubernetes API and the user services exposed by the cluster. In management clusters, it is replaced by the management network.

  • Management network

    Serves external connections to the Container Cloud Management API and services of the management cluster. Not available in a managed cluster.

  • Storage access network

    Connects Ceph nodes to the storage clients. The Ceph OSD service is bound to the address on this network. This is a public network in Ceph terms. 0

  • Storage replication network

    Connects Ceph nodes to each other. Serves internal replication traffic. This is a cluster network in Ceph terms. 0

Each network is represented on the host by a virtual Linux bridge. Physical interfaces may be connected to one of the bridges directly, or through a logical VLAN subinterface, or combined into a bond interface that is in turn connected to a bridge.
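As an illustration of the bridging scheme described above, the following netplan-style fragment (as it could be rendered from an L2 template) combines two physical NICs into a bond, adds a VLAN subinterface on top of the bond, and connects the VLAN to the k8s-lcm bridge. Interface names, the VLAN ID, and addresses are placeholders.

 version: 2
 ethernets:
   ens3f0: {dhcp4: false, dhcp6: false}
   ens3f1: {dhcp4: false, dhcp6: false}
 bonds:
   bond0:
     interfaces: [ens3f0, ens3f1]
     parameters:
       mode: 802.3ad
 vlans:
   bond0.403:
     id: 403
     link: bond0
 bridges:
   k8s-lcm:
     interfaces: [bond0.403]
     addresses: [10.0.11.15/24]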

The following table summarizes the default names used for the bridges connected to the networks listed above:

Management cluster

Network type                            Bridge name     Assignment method (TechPreview)
OOB network                             N/A             N/A
PXE network                             bm-pxe          By a static interface name
Management network                      k8s-lcm 2       By a subnet label ipam/SVC-k8s-lcm
Kubernetes workloads network            k8s-pods 1      By a static interface name

Managed cluster

Network type                            Bridge name     Assignment method
OOB network                             N/A             N/A
PXE network                             N/A             N/A
LCM network                             k8s-lcm 2       By a subnet label ipam/SVC-k8s-lcm
Kubernetes workloads network            k8s-pods 1      By a static interface name
Kubernetes external network             k8s-ext         By a static interface name
Storage access (public) network         ceph-public     By the subnet label ipam/SVC-ceph-public
Storage replication (cluster) network   ceph-cluster    By the subnet label ipam/SVC-ceph-cluster

0  Ceph network configuration reference

1  Interface name for this network role is static and cannot be changed.

2  Use of this interface name (and network role) is mandatory for every cluster.

Storage

The baremetal-based Mirantis Container Cloud uses Ceph as a distributed storage system for file, block, and object storage. This section provides an overview of a Ceph cluster deployed by Container Cloud.

Overview

Mirantis Container Cloud deploys Ceph on baremetal-based managed clusters using Helm charts with the following components:

Rook Ceph Operator

A storage orchestrator that deploys Ceph on top of a Kubernetes cluster. Also known as Rook or Rook Operator. Rook operations include:

  • Deploying and managing a Ceph cluster based on provided Rook CRs such as CephCluster, CephBlockPool, CephObjectStore, and so on.

  • Orchestrating the state of the Ceph cluster and all its daemons.

KaaSCephCluster custom resource (CR)

Represents the customization of a Kubernetes installation and allows you to define the required Ceph configuration through the Container Cloud web UI before deployment. For example, you can define the failure domain, Ceph pools, Ceph node roles, number of Ceph components such as Ceph OSDs, and so on. The ceph-kcc-controller controller on the Container Cloud management cluster manages the KaaSCephCluster CR.

Ceph Controller

A Kubernetes controller that obtains the parameters from Container Cloud through a CR, creates CRs for Rook and updates its CR status based on the Ceph cluster deployment progress. It creates users, pools, and keys for OpenStack and Kubernetes and provides Ceph configurations and keys to access them. Also, Ceph Controller eventually obtains the data from the OpenStack Controller for the Keystone integration and updates the RADOS Gateway services configurations to use Kubernetes for user authentication. Ceph Controller operations include:

  • Transforming user parameters from the Container Cloud Ceph CR into Rook CRs and deploying a Ceph cluster using Rook.

  • Providing integration of the Ceph cluster with Kubernetes.

  • Providing data for OpenStack to integrate with the deployed Ceph cluster.

Ceph Status Controller

A Kubernetes controller that collects all valuable parameters from the current Ceph cluster, its daemons, and entities and exposes them into the KaaSCephCluster status. Ceph Status Controller operations include:

  • Collecting all statuses from a Ceph cluster and corresponding Rook CRs.

  • Collecting additional information on the health of Ceph daemons.

  • Providing information to the status section of the KaaSCephCluster CR.

Ceph Request Controller

A Kubernetes controller that obtains the parameters from Container Cloud through a CR and performs Ceph OSD lifecycle management (LCM) operations. It allows for a safe Ceph OSD removal from the Ceph cluster. Ceph Request Controller operations include:

  • Providing an ability to perform Ceph OSD LCM operations.

  • Obtaining specific CRs to remove Ceph OSDs and executing them.

  • Pausing the regular Ceph Controller reconcile until all requests are completed.

A typical Ceph cluster consists of the following components:

  • Ceph Monitors - three or, in rare cases, five Ceph Monitors.

  • Ceph Managers:

    • Before Container Cloud 2.22.0, one Ceph Manager.

    • Since Container Cloud 2.22.0, two Ceph Managers.

  • RADOS Gateway services - Mirantis recommends having three or more RADOS Gateway instances for HA.

  • Ceph OSDs - the number of Ceph OSDs may vary according to the deployment needs.

    Warning

    • A Ceph cluster with 3 Ceph nodes does not provide hardware fault tolerance and is not eligible for recovery operations, such as a disk or an entire Ceph node replacement.

    • A Ceph cluster uses a replication factor of 3. If the number of Ceph OSDs is less than 3, the Ceph cluster moves to the degraded state with a restriction of write operations until the number of alive Ceph OSDs equals the replication factor again.

The placement of Ceph Monitors and Ceph Managers is defined in the KaaSCephCluster CR.

The following diagram illustrates the way a Ceph cluster is deployed in Container Cloud:

_images/ceph-deployment.png

The following diagram illustrates the processes within a deployed Ceph cluster:

_images/ceph-data-flow.png
Limitations

A Ceph cluster configuration in Mirantis Container Cloud includes but is not limited to the following limitations:

  • Only one Ceph Controller per managed cluster and only one Ceph cluster per Ceph Controller are supported.

  • The replication size for any Ceph pool must be set to more than 1.

  • All CRUSH rules must have the same failure_domain.

  • Only one CRUSH tree per cluster. The separation of devices per Ceph pool is supported through device classes with only one pool of each type for a device class.

  • Only the following types of CRUSH buckets are supported:

    • topology.kubernetes.io/region

    • topology.kubernetes.io/zone

    • topology.rook.io/datacenter

    • topology.rook.io/room

    • topology.rook.io/pod

    • topology.rook.io/pdu

    • topology.rook.io/row

    • topology.rook.io/rack

    • topology.rook.io/chassis

  • Only IPv4 is supported.

  • If two or more Ceph OSDs are located on the same device, there must be no dedicated WAL or DB for this class.

  • Only a full collocation or dedicated WAL and DB configurations are supported.

  • The minimum size of any defined Ceph OSD device is 5 GB.

  • Lifted since Container Cloud 2.24.2 (Cluster releases 14.0.1 and 15.0.1). Ceph cluster does not support removable devices (with hotplug enabled) for deploying Ceph OSDs.

  • Ceph OSDs support only raw disks as data devices meaning that no dm or lvm devices are allowed.

  • When adding a Ceph node with the Ceph Monitor role, if any issues occur with the Ceph Monitor, rook-ceph removes it and adds a new Ceph Monitor instead, named using the next alphabetic character in order. Therefore, the Ceph Monitor names may not follow the alphabetical order. For example, a, b, d, instead of a, b, c.

  • Reducing the number of Ceph Monitors is not supported and causes the Ceph Monitor daemons removal from random nodes.

  • Removal of the mgr role in the nodes section of the KaaSCephCluster CR does not remove Ceph Managers. To remove a Ceph Manager from a node, remove it from the nodes spec and manually delete the mgr pod in the Rook namespace.

  • Lifted since Container Cloud 2.26.0 (Cluster releases 17.1.0 and 16.1.10). Ceph does not support allocation of Ceph RGW pods on nodes where the Federal Information Processing Standard (FIPS) mode is enabled.

Addressing storage devices

There are several formats to use when specifying and addressing storage devices of a Ceph cluster. The default and recommended one is the /dev/disk/by-id format. This format is reliable and unaffected by the disk controller actions, such as device name shuffling or /dev/disk/by-path recalculating.

Difference between by-id, name, and by-path formats

The storage device /dev/disk/by-id format is in most cases based on a disk serial number, which is unique for each disk. A by-id symlink is created by the udev rules in the following format, where <BusID> is an ID of the bus to which the disk is attached and <DiskSerialNumber> stands for a unique disk serial number:

/dev/disk/by-id/<BusID>-<DiskSerialNumber>

Typical by-id symlinks for storage devices look as follows:

/dev/disk/by-id/nvme-SAMSUNG_MZ1LB3T8HMLA-00007_S46FNY0R394543
/dev/disk/by-id/scsi-SATA_HGST_HUS724040AL_PN1334PEHN18ZS
/dev/disk/by-id/ata-WDC_WD4003FZEX-00Z4SA0_WD-WMC5D0D9DMEH

In the example above, symlinks contain the following IDs:

  • Bus IDs: nvme, scsi-SATA and ata

  • Disk serial numbers: SAMSUNG_MZ1LB3T8HMLA-00007_S46FNY0R394543, HGST_HUS724040AL_PN1334PEHN18ZS and WDC_WD4003FZEX-00Z4SA0_WD-WMC5D0D9DMEH.

An exception to this rule is the wwn by-id symlinks, which are programmatically generated at boot. They are not solely based on disk serial numbers but also include other node information. This can lead to the wwn being recalculated when the node reboots. As a result, this symlink type cannot guarantee a persistent disk identifier and should not be used as a stable storage device symlink in a Ceph cluster.

The storage device name and by-path formats cannot be considered persistent because the sequence in which block devices are added during boot is semi-arbitrary. This means that block device names, for example, nvme0n1 and sdc, are assigned to physical disks during discovery, which may vary inconsistently from the previous node state. The same inconsistency applies to by-path symlinks, as they rely on the shortest physical path to the device at boot and may differ from the previous node state.

Therefore, Mirantis highly recommends using storage device by-id symlinks that contain disk serial numbers. This approach enables you to use a persistent device identifier addressed in the Ceph cluster specification.

Example KaaSCephCluster with device by-id identifiers

Below is an example KaaSCephCluster custom resource using the /dev/disk/by-id format for storage devices specification:

Note

Since Container Cloud 2.25.0, you can use the fullPath field for the by-id symlinks. For earlier product versions, use the name field instead.

 apiVersion: kaas.mirantis.com/v1alpha1
 kind: KaaSCephCluster
 metadata:
   name: ceph-cluster-managed-cluster
   namespace: managed-ns
 spec:
   cephClusterSpec:
     nodes:
       # Add the exact ``nodes`` names.
       # Obtain the name from the "get machine" list.
       cz812-managed-cluster-storage-worker-noefi-58spl:
         roles:
         - mgr
         - mon
       # All disk configuration must be reflected in ``status.providerStatus.hardware.storage`` of the ``Machine`` object
         storageDevices:
         - config:
             deviceClass: ssd
           fullPath: /dev/disk/by-id/scsi-1ATA_WDC_WDS100T2B0A-00SM50_200231440912
       cz813-managed-cluster-storage-worker-noefi-lr4k4:
         roles:
         - mgr
         - mon
         storageDevices:
         - config:
             deviceClass: nvme
           fullPath: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB3T8HMLA-00007_S46FNY0R394543
       cz814-managed-cluster-storage-worker-noefi-z2m67:
         roles:
         - mgr
         - mon
         storageDevices:
         - config:
             deviceClass: nvme
           fullPath: /dev/disk/by-id/nvme-SAMSUNG_ML1EB3T8HMLA-00007_S46FNY1R130423
     pools:
     - default: true
       deviceClass: ssd
       name: kubernetes
       replicated:
         size: 3
       role: kubernetes
   k8sCluster:
     name: managed-cluster
     namespace: managed-ns
Extended hardware configuration

Mirantis Container Cloud provides APIs that enable you to define hardware configurations that extend the reference architecture:

  • Bare Metal Host Profile API

    Enables quick configuration of host boot and storage devices and assignment of custom configuration profiles to individual machines. See Create a custom bare metal host profile.

  • IP Address Management API

    Enables quick configuration of host network interfaces and IP addresses and setup of IP address ranges for automatic allocation. See Create L2 templates.

Typically, operations with the extended hardware configurations are available through the API and CLI, but not the web UI.
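For instance, a custom bare metal host profile is expressed as a BareMetalHostProfile object. The heavily abbreviated fragment below only illustrates the idea of describing disks, partitions, and file systems declaratively; the field names are assumptions based on typical examples, so refer to Create a custom bare metal host profile for the authoritative schema.

 apiVersion: kaas.mirantis.com/v1alpha1
 kind: BareMetalHostProfile
 metadata:
   name: demo-worker-profile
   namespace: demo-project
 spec:
   devices:
     - device:
         minSize: 120Gi      # assumed field: pick any disk of at least this size
         wipe: true
       partitions:
         - name: root
           size: 0           # assumed convention: use the remaining space
   fileSystems:
     - fileSystem: ext4
       partition: root
       mountPoint: /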

Automatic upgrade of a host operating system

To keep the operating system on a bare metal host up to date with the latest security updates, the operating system requires periodic software package upgrades that may or may not require a host reboot.

Mirantis Container Cloud uses life cycle management tools to update the operating system packages on the bare metal hosts. Container Cloud may also trigger a restart of bare metal hosts to apply the updates.

In the management cluster of Container Cloud, software package upgrades and host restarts are applied automatically when a new Container Cloud version with an available kernel or software package upgrade is released.

In managed clusters, package upgrades and host restarts are applied as part of the usual cluster upgrade using the Update cluster option in the Container Cloud web UI.

Operating system upgrade and host restart are applied to cluster nodes one by one. If Ceph is installed in the cluster, the Container Cloud orchestration securely pauses the Ceph OSDs on the node before restart. This avoids degradation of the storage service.

Caution

  • Depending on the cluster configuration, applying security updates and host restart can increase the update time for each node to up to 1 hour.

  • Cluster nodes are updated one by one. Therefore, for large clusters, the update may take several days to complete.

Built-in load balancing

The Mirantis Container Cloud managed clusters use MetalLB for load balancing of services, and HAProxy with a VIP managed by the Virtual Router Redundancy Protocol (VRRP) through Keepalived for the Kubernetes API load balancer.

Kubernetes API load balancing

Every control plane node of each Kubernetes cluster runs the kube-api service in a container. This service provides a Kubernetes API endpoint. Every control plane node also runs the haproxy server that provides load balancing with backend health checking for all kube-api endpoints as backends.

The default load balancing method is least_conn. With this method, a request is sent to the server with the least number of active connections. The default load balancing method cannot be changed using the Container Cloud API.

Only one of the control plane nodes at any given time serves as a front end for Kubernetes API. To ensure this, the Kubernetes clients use a virtual IP (VIP) address for accessing Kubernetes API. This VIP is assigned to one node at a time using VRRP. Keepalived running on each control plane node provides health checking and failover of the VIP.

Keepalived is configured in multicast mode.

Note

The use of a VIP address for load balancing of the Kubernetes API requires that all control plane nodes of a Kubernetes cluster are connected to a shared L2 segment. This limitation prevents installing full L3 topologies where control plane nodes are split between different L2 segments and L3 networks.

Services load balancing

The services provided by the Kubernetes clusters, including Container Cloud and user services, are balanced by MetalLB. The metallb-speaker service runs on every worker node in the cluster and handles connections to the service IP addresses.

MetalLB runs in the MAC-based (L2) mode. This means that all control plane nodes must be connected to a shared L2 segment, which is a limitation that does not allow installing full L3 cluster topologies.
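In upstream MetalLB terms, the L2 mode corresponds to an address pool advertised by the speakers over ARP. The following upstream-style resources are shown only to illustrate the mechanism; in Container Cloud, MetalLB address ranges are configured through the product's own API objects and subnets rather than by creating these objects manually. Addresses are placeholders.

 apiVersion: metallb.io/v1beta1
 kind: IPAddressPool
 metadata:
   name: services-pool
   namespace: metallb-system
 spec:
   addresses:
     - 10.0.12.100-10.0.12.120      # placeholder range from the Services subnet
 ---
 apiVersion: metallb.io/v1beta1
 kind: L2Advertisement
 metadata:
   name: services-l2
   namespace: metallb-system
 spec:
   ipAddressPools:
     - services-pool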

Kubernetes lifecycle management

The Kubernetes lifecycle management (LCM) engine in Mirantis Container Cloud consists of the following components:

LCM Controller

Responsible for all LCM operations. Consumes the LCMCluster object and orchestrates actions through LCM Agent.

LCM Agent

Runs on the target host. Executes Ansible playbooks in headless mode.

Helm Controller

Responsible for the Helm charts life cycle, is installed by the provider as a Helm v3 chart.

The Kubernetes LCM components handle the following custom resources:

  • LCMCluster

  • LCMMachine

  • HelmBundle

The following diagram illustrates handling of the LCM custom resources by the Kubernetes LCM components. On a managed cluster, apiserver handles multiple Kubernetes objects, for example, deployments, nodes, RBAC, and so on.

_images/lcm-components.png
LCM custom resources

The Kubernetes LCM components handle the following custom resources (CRs):

  • LCMMachine

  • LCMCluster

  • HelmBundle

LCMMachine

Describes a machine that is located on a cluster. It contains the machine type, control or worker, StateItems that correspond to Ansible playbooks and miscellaneous actions, for example, downloading a file or executing a shell command. LCMMachine reflects the current state of the machine, for example, a node IP address, and each StateItem through its status. Multiple LCMMachine CRs can correspond to a single cluster.

LCMCluster

Describes a managed cluster. In its spec, LCMCluster contains a set of StateItems for each type of LCMMachine, which describe the actions that must be performed to deploy the cluster. LCMCluster is created by the provider, using machineTypes of the Release object. The status field of LCMCluster reflects the status of the cluster, for example, the number of ready or requested nodes.

HelmBundle

Wrapper for Helm charts that is handled by Helm Controller. HelmBundle tracks what Helm charts must be installed on a managed cluster.
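The following abbreviated HelmBundle sketch illustrates the wrapper structure: a list of Helm releases with chart references and values. The lcm.mirantis.com/v1alpha1 API group and the release fields follow typical HelmBundle examples but should be treated as assumptions; the chart URL and values are placeholders.

 apiVersion: lcm.mirantis.com/v1alpha1
 kind: HelmBundle
 metadata:
   name: managed-demo                # matches the LCMCluster name of the cluster
   namespace: demo-project
 spec:
   releases:
     - name: metrics-server
       chartURL: https://binary.mirantis.com/core/helm/metrics-server-1.0.0.tgz   # placeholder URL
       namespace: kube-system
       values:
         replicas: 1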

LCM Controller

LCM Controller runs on the management cluster and orchestrates the LCMMachine objects according to their type and their LCMCluster object.

Once the LCMCluster and LCMMachine objects are created, LCM Controller starts monitoring them to modify the spec fields and update the status fields of the LCMMachine objects when required. The status field of LCMMachine is updated by LCM Agent running on a node of a management or managed cluster.

Each LCMMachine has the following lifecycle states:

  1. Uninitialized - the machine is not yet assigned to an LCMCluster.

  2. Pending - the agent reports a node IP address and host name.

  3. Prepare - the machine executes StateItems that correspond to the prepare phase. This phase usually involves downloading the necessary archives and packages.

  4. Deploy - the machine executes StateItems that correspond to the deploy phase, that is, becoming a Mirantis Kubernetes Engine (MKE) node.

  5. Ready - the machine is deployed.

  6. Upgrade - the machine is being upgraded to the new MKE version.

  7. Reconfigure - the machine executes StateItems that correspond to the reconfigure phase. The machine configuration is being updated without affecting workloads running on the machine.

The templates for StateItems are stored in the machineTypes field of an LCMCluster object, with separate lists for the MKE manager and worker nodes. Each StateItem has an execution phase field. For a management and managed cluster, the phases are executed as follows:

  1. The prepare phase is executed for all machines for which it was not executed yet. This phase comprises downloading the files necessary for the cluster deployment, installing the required packages, and so on.

  2. During the deploy phase, a node is added to the cluster. LCM Controller applies the deploy phase to the nodes in the following order:

    1. First manager node is deployed.

    2. The remaining manager nodes are deployed one by one and the worker nodes are deployed in batches (by default, up to 50 worker nodes at the same time).

LCM Controller deploys and upgrades a Mirantis Container Cloud cluster by setting StateItems of LCMMachine objects following the corresponding StateItems phases described above. The Container Cloud cluster upgrade process follows the same logic that is used for a new deployment, that is, applying a new set of StateItems to the LCMMachine objects after updating the LCMCluster object. However, if an existing worker node is being upgraded, LCM Controller performs draining and cordoning of this node, honoring the Pod Disruption Budgets. This operation prevents unexpected disruptions of the workloads.
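Schematically, the machineTypes templates with their StateItems can be pictured as in the fragment below. The StateItem names, runners, and parameters are placeholders invented for illustration; the real StateItems are defined by the Release objects.

 # Schematic LCMCluster fragment: placeholder StateItems only.
 spec:
   machineTypes:
     control:
       - name: prepare-packages       # placeholder StateItem
         phase: prepare
         runner: ansible
         params:
           playbook: prepare.yaml
       - name: deploy-mke             # placeholder StateItem
         phase: deploy
         runner: ansible
         params:
           playbook: deploy.yaml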

LCM Agent

LCM Agent handles a single machine that belongs to a management or managed cluster. It runs on the machine operating system but communicates with apiserver of the management cluster. LCM Agent is deployed as a systemd unit using cloud-init. LCM Agent has a built-in self-upgrade mechanism.
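A minimal cloud-init sketch of the delivery mechanism described above might look as follows. The unit name and binary path are hypothetical; the actual cloud-init data is generated by the Container Cloud provider.

 #cloud-config
 write_files:
   - path: /etc/systemd/system/lcm-agent.service   # hypothetical unit name
     content: |
       [Unit]
       Description=LCM Agent
       After=network-online.target

       [Service]
       # hypothetical binary path
       ExecStart=/usr/local/bin/lcm-agent
       Restart=always

       [Install]
       WantedBy=multi-user.target
 runcmd:
   - systemctl daemon-reload
   - systemctl enable --now lcm-agent.service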

LCM Agent monitors the spec of a particular LCMMachine object to reconcile the machine state with the object StateItems and update the LCMMachine status accordingly. The actions that LCM Agent performs while handling the StateItems are as follows:

  • Download configuration files

  • Run shell commands

  • Run Ansible playbooks in headless mode

LCM Agent provides the IP address and host name of the machine for the LCMMachine status parameter.

Helm Controller

Helm Controller is used by Mirantis Container Cloud to handle the core addons of management and managed clusters, such as StackLight, and the application addons, such as the OpenStack components.

Helm Controller is installed as a separate Helm v3 chart by the Container Cloud provider. Its Pods are created using Deployment.

The Helm release information is stored in the KaaSRelease object for the management clusters and in the ClusterRelease object for all types of the Container Cloud clusters. The Container Cloud provider uses the information from the ClusterRelease object together with the Container Cloud API Cluster spec. In the Cluster spec, the operator can specify the Helm release name and charts to use. By combining the information from the Cluster providerSpec parameter and its ClusterRelease object, the cluster actuator generates the LCMCluster objects. These objects are further handled by LCM Controller, and the HelmBundle object is handled by Helm Controller. HelmBundle must have the same name as the LCMCluster object for the cluster that HelmBundle applies to.

Although a cluster actuator can only create a single HelmBundle per cluster, Helm Controller can handle multiple HelmBundle objects per cluster.

Helm Controller handles the HelmBundle objects and reconciles them with the state of Helm in its cluster.

Helm Controller can also be used by the management cluster with corresponding HelmBundle objects created as part of the initial management cluster setup.

Identity and access management

Identity and access management (IAM) provides a central point of users and permissions management of the Mirantis Container Cloud cluster resources in a granular and unified manner. Also, IAM provides infrastructure for single sign-on user experience across all Container Cloud web portals.

IAM for Container Cloud consists of the following components:

Keycloak
  • Provides the OpenID Connect endpoint

  • Integrates with an external identity provider (IdP), for example, existing LDAP or Google Open Authorization (OAuth)

  • Stores roles mapping for users

IAM Controller
  • Provides IAM API with data about Container Cloud projects

  • Handles all role-based access control (RBAC) components in Kubernetes API

IAM API

Provides an abstraction API for creating user scopes and roles

External identity provider integration

To be consistent and keep the integrity of the user database and user permissions, IAM in Mirantis Container Cloud stores the user identity information internally. However, in real deployments, an identity provider usually already exists.

Out of the box, IAM in Container Cloud supports integration with LDAP and Google Open Authorization (OAuth). If LDAP is configured as an external identity provider, IAM performs one-way synchronization by mapping attributes according to the configuration.

In the case of the Google Open Authorization (OAuth) integration, the user is automatically registered and their credentials are stored in the internal database according to the user template configuration. The Google OAuth registration workflow is as follows:

  1. The user requests a Container Cloud web UI resource.

  2. The user is redirected to the IAM login page and logs in using the Log in with Google account option.

  3. IAM creates a new user with the default access rights that are defined in the user template configuration.

  4. The user can access the Container Cloud web UI resource.

The following diagram illustrates the external IdP integration to IAM:

_images/iam-ext-idp.png

You can configure simultaneous integration with both external IdPs with the user identity matching feature enabled.

Authentication and authorization

Mirantis IAM uses the OpenID Connect (OIDC) protocol for handling authentication.

Implementation flow

Mirantis IAM acts as an OpenID Connect (OIDC) provider: it issues tokens and exposes discovery endpoints.

The credentials can be handled by IAM itself or delegated to an external identity provider (IdP).

The issued JSON Web Token (JWT) is sufficient to perform operations across Mirantis Container Cloud according to the scope and role defined in it. Mirantis recommends using asymmetric cryptography for token signing (RS256) to minimize the dependency between IAM and managed components.

When Container Cloud calls Mirantis Kubernetes Engine (MKE), the user in Keycloak is created automatically with a JWT issued by Keycloak on behalf of the end user. MKE, in its turn, verifies whether the JWT is issued by Keycloak. If the user retrieved from the token does not exist in the MKE database, the user is automatically created in the MKE database based on the information from the token.

The authorization implementation is out of the scope of IAM in Container Cloud. This functionality is delegated to the component level. IAM interacts with a Container Cloud component using the OIDC token, the content of which is processed by the component itself to enforce the required authorization. Such an approach enables any underlying authorization mechanism that does not depend on IAM while still providing a unified user experience across all Container Cloud components.

Kubernetes CLI authentication flow

The following diagram illustrates the Kubernetes CLI authentication flow. The authentication flow for Helm and other Kubernetes-oriented CLI utilities is identical to the Kubernetes CLI flow, but JSON Web Tokens (JWT) must be pre-provisioned.

_images/iam-authn-k8s.png
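
As an illustration of the flow above, a kubeconfig that uses OIDC typically carries a pre-provisioned ID token and a refresh token for the Keycloak realm. The following minimal sketch uses the standard Kubernetes OIDC auth provider format; the issuer URL, client ID, and token placeholders are assumptions, and the kubeconfig generated by Container Cloud may differ:

    users:
    - name: operator@example.com
      user:
        auth-provider:
          name: oidc
          config:
            idp-issuer-url: https://<keycloak-address>/auth/realms/iam   # IAM realm (assumed)
            client-id: <oidc-client-id>
            id-token: <pre-provisioned JWT>
            refresh-token: <refresh token>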

See also

IAM resources

Monitoring

Mirantis Container Cloud uses StackLight, the logging, monitoring, and alerting solution that provides a single pane of glass for cloud maintenance and day-to-day operations, as well as critical insights into cloud health, including operational information about the components deployed in management and managed clusters.

StackLight is based on Prometheus, an open-source monitoring solution and a time series database.

Deployment architecture

Mirantis Container Cloud deploys the StackLight stack as a release of a Helm chart that contains the helm-controller and helmbundles.lcm.mirantis.com (HelmBundle) custom resources. The StackLight HelmBundle consists of a set of Helm charts with the StackLight components that include:

StackLight components overview

StackLight component

Description

Alerta

Receives, consolidates, and deduplicates the alerts sent by Alertmanager and visually represents them through a simple web UI. Using the Alerta web UI, you can view the most recent or watched alerts, and group and filter them.

Alertmanager

Handles the alerts sent by client applications such as Prometheus, deduplicates, groups, and routes alerts to receiver integrations. Using the Alertmanager web UI, you can view the most recent fired alerts, silence them, or view the Alertmanager configuration.

Elasticsearch Curator

Maintains the data (indexes) in OpenSearch by performing such operations as creating, closing, or opening an index as well as deleting a snapshot. Also, manages the data retention policy in OpenSearch.

Elasticsearch Exporter Compatible with OpenSearch

The Prometheus exporter that gathers internal OpenSearch metrics.

Grafana

Builds and visually represents metric graphs based on time series databases. Grafana supports querying of Prometheus using the PromQL language.

Database backends

StackLight uses PostgreSQL for Alerta and Grafana. PostgreSQL reduces the data storage fragmentation while enabling high availability. High availability is achieved using Patroni, the PostgreSQL cluster manager that monitors for node failures and manages failover of the primary node. StackLight also uses Patroni to manage major version upgrades of PostgreSQL clusters, which allows leveraging the database engine functionality and improvements as they are introduced upstream in new releases, maintaining functional continuity without version lock-in.

Logging stack

Responsible for collecting, processing, and persisting logs and Kubernetes events. By default, when deploying through the Container Cloud web UI, only the metrics stack is enabled on managed clusters. To enable StackLight to gather managed cluster logs, enable the logging stack during deployment. On management clusters, the logging stack is enabled by default. The logging stack components include:

  • OpenSearch, which stores logs and notifications.

  • Fluentd-logs, which collects logs, sends them to OpenSearch, generates metrics based on analysis of incoming log entries, and exposes these metrics to Prometheus.

  • OpenSearch Dashboards, which provides real-time visualization of the data stored in OpenSearch and enables you to detect issues.

  • Metricbeat, which collects Kubernetes events and sends them to OpenSearch for storage.

  • Prometheus-es-exporter, which presents the OpenSearch data as Prometheus metrics by periodically sending configured queries to the OpenSearch cluster and exposing the results to a scrapable HTTP endpoint like other Prometheus targets.

Note

The logging mechanism performance depends on the cluster log load. In case of a high load, you may need to increase the default resource requests and limits for fluentdLogs. For details, see StackLight configuration parameters: Resource limits.

Metric collector

Collects telemetry data (CPU or memory usage, number of active alerts, and so on) from Prometheus and sends the data to centralized cloud storage for further processing and analysis. Metric collector runs on the management cluster.

Note

This component is designated for internal StackLight use only.

Prometheus

Gathers metrics. Automatically discovers and monitors the endpoints. Using the Prometheus web UI, you can view simple visualizations and debug. By default, the Prometheus database stores metrics of the past 15 days or up to 15 GB of data depending on the limit that is reached first.

Prometheus Blackbox Exporter

Allows monitoring endpoints over HTTP, HTTPS, DNS, TCP, and ICMP.

Prometheus-es-exporter

Presents the OpenSearch data as Prometheus metrics by periodically sending configured queries to the OpenSearch cluster and exposing the results to a scrapable HTTP endpoint like other Prometheus targets.

Prometheus Node Exporter

Gathers hardware and operating system metrics exposed by kernel.

Prometheus Relay

Adds a proxy layer to Prometheus to merge the results from the underlay Prometheus servers and prevent gaps in case some data is missing on some servers. Available only in the HA StackLight mode.

Salesforce notifier

Enables sending Alertmanager notifications to Salesforce to allow creating Salesforce cases and closing them once the alerts are resolved. Disabled by default.

Salesforce reporter

Queries Prometheus for the data about the amount of vCPU, vRAM, and vStorage used and available, combines the data, and sends it to Salesforce daily. Mirantis uses the collected data for further analysis and reports to improve the quality of customer support. Disabled by default.

Telegraf

Collects metrics from the system. Telegraf is plugin-driven and has two distinct sets of plugins: input plugins collect metrics from the system, services, or third-party APIs; output plugins write and expose metrics to various destinations.

The Telegraf agents used in Container Cloud include:

  • telegraf-ds-smart monitors SMART disks, and runs on both management and managed clusters.

  • telegraf-ironic monitors Ironic on the baremetal-based management clusters. The ironic input plugin collects and processes data from the Ironic HTTP API, while the http_response input plugin checks the Ironic HTTP API availability. As an output plugin, Telegraf uses prometheus to expose the collected data as a Prometheus target.

  • telegraf-docker-swarm gathers metrics from the Mirantis Container Runtime API about the Docker nodes, networks, and Swarm services. This is a Docker Telegraf input plugin with downstream additions.

Telemeter

Enables a multi-cluster view through a Grafana dashboard of the management cluster. Telemeter includes a Prometheus federation push server and clients to enable isolated Prometheus instances, which cannot be scraped from a central Prometheus instance, to push metrics to the central location.

The Telemeter services are distributed between the management cluster that hosts the Telemeter server and managed clusters that host the Telemeter client. The metrics from managed clusters are aggregated on management clusters.

Note

This component is designated for internal StackLight use only.

Every Helm chart contains a default values.yml file. These default values are partially overridden by custom values defined in the StackLight Helm chart.

Before deploying a managed cluster, you can select the HA or non-HA StackLight architecture type. The non-HA mode is set by default on managed clusters. On management clusters, StackLight is deployed in the HA mode only. The following table lists the differences between the HA and non-HA modes:

StackLight database modes

Non-HA StackLight mode default

HA StackLight mode

  • One Prometheus instance

  • One Alertmanager instance Since 2.24.0 and 2.24.2 for MOSK 23.2

  • One OpenSearch instance

  • One PostgreSQL instance

  • One iam-proxy instance

One persistent volume is provided for storing data. In case of a service or node failure, a new pod is redeployed and the volume is reattached to provide the existing data. Such a setup has a reduced hardware footprint but provides lower performance.

  • Two Prometheus instances

  • Two Alertmanager instances

  • Three OpenSearch instances

  • Three PostgreSQL instances

  • Two iam-proxy instances Since 2.23.0 and 2.23.1 for MOSK 23.1

Local Volume Provisioner is used to provide local host storage. In case of a service or node failure, the traffic is automatically redirected to any other running Prometheus or OpenSearch server. For better performance, Mirantis recommends that you deploy StackLight in the HA mode. Two iam-proxy instances ensure access to HA components if one iam-proxy node fails.

Note

Before Container Cloud 2.24.0, Alertmanager had 2 replicas in the non-HA mode.

Caution

Non-HA StackLight requires a backend storage provider, for example, a Ceph cluster. For details, see Storage.

Depending on the Container Cloud cluster type and selected StackLight database mode, StackLight is deployed on the following number of nodes:

StackLight database modes

Cluster

StackLight database mode

Target nodes

Management

HA mode

All Kubernetes master nodes

Managed

Non-HA mode

  • All nodes with the stacklight label.

  • If no nodes have the stacklight label, StackLight is spread across all worker nodes. The minimal requirement is at least 1 worker node.

HA mode

All nodes with the stacklight label. The minimal requirement is 3 nodes with the stacklight label. Otherwise, StackLight deployment does not start.

Authentication flow

StackLight provides five web UIs including Prometheus, Alertmanager, Alerta, OpenSearch Dashboards, and Grafana. Access to StackLight web UIs is protected by Keycloak-based Identity and access management (IAM). All web UIs except Alerta are exposed to IAM through the IAM proxy middleware. The Alerta configuration provides direct integration with IAM.

The following diagram illustrates accessing the IAM-proxied StackLight web UIs, for example, Prometheus web UI:

_images/sl-auth-iam-proxied.png

Authentication flow for the IAM-proxied StackLight web UIs:

  1. A user enters the public IP of a StackLight web UI, for example, Prometheus web UI.

  2. The public IP leads to IAM proxy, deployed as a Kubernetes LoadBalancer, which protects the Prometheus web UI.

  3. LoadBalancer routes the HTTP request to Kubernetes internal IAM proxy service endpoints, specified in the X-Forwarded-Proto or X-Forwarded-Host headers.

  4. The Keycloak login form opens (the login_url field in the IAM proxy configuration, which points to Keycloak realm) and the user enters the user name and password.

  5. Keycloak validates the user name and password.

  6. The user obtains access to the Prometheus web UI (the upstreams field in the IAM proxy configuration).

Note

  • The discovery URL is the URL of the IAM service.

  • The upstream URL is the hidden endpoint of a web UI (Prometheus web UI in the example above).
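
To tie the fields above together, the following hypothetical and simplified IAM proxy configuration fragment shows how the discovery URL, login URL, and upstreams might relate; the format and values are assumptions used for illustration only:

    # Hypothetical IAM proxy configuration fragment (illustration only)
    discovery_url: https://<keycloak-address>/auth/realms/iam   # URL of the IAM service
    login_url: https://<keycloak-address>/auth/realms/iam       # Keycloak realm used for the login form
    upstreams:
    - <hidden-endpoint-of-the-web-UI>                           # for example, the internal Prometheus web UI endpoint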

The following diagram illustrates accessing the Alerta web UI:

_images/sl-authentication-direct.png

Authentication flow for the Alerta web UI:

  1. A user enters the public IP of the Alerta web UI.

  2. The public IP leads to Alerta deployed as a Kubernetes LoadBalancer type.

  3. LoadBalancer routes the HTTP request to the Kubernetes internal Alerta service endpoint.

  4. The Keycloak login form opens (Alerta refers to the IAM realm) and the user enters the user name and password.

  5. Keycloak validates the user name and password.

  6. The user obtains access to the Alerta web UI.

Supported features

Using the Mirantis Container Cloud web UI, at the pre-deployment stage of a managed cluster, you can view, enable or disable, or tune the following StackLight features:

  • StackLight HA mode.

  • Database retention size and time for Prometheus.

  • Tunable index retention period for OpenSearch.

  • Tunable PersistentVolumeClaim (PVC) size for Prometheus and OpenSearch, set by default to 16 GB for Prometheus and 30 GB for OpenSearch. The PVC size must be logically aligned with the retention periods or sizes for these components.

  • Email and Slack receivers for the Alertmanager notifications.

  • Predefined set of dashboards.

  • Predefined set of alerts and capability to add new custom alerts for Prometheus in the following exemplary format:

    - alert: HighErrorRate
      expr: job:request_latency_seconds:mean5m{job="myjob"} > 0.5
      for: 10m
      labels:
        severity: page
      annotations:
        summary: High request latency
    
Monitored components

StackLight measures, analyzes, and promptly reports failures that may occur in the following Mirantis Container Cloud components and their sub-components, if any:

  • Ceph

  • Ironic

  • Kubernetes services:

    • Calico

    • etcd

    • Kubernetes cluster

    • Kubernetes containers

    • Kubernetes deployments

    • Kubernetes nodes

  • NGINX

  • Node hardware and operating system

  • PostgreSQL

  • StackLight:

    • Alertmanager

    • OpenSearch

    • Grafana

    • Prometheus

    • Prometheus Relay

    • Salesforce notifier

    • Telemeter

  • SSL certificates

  • Mirantis Kubernetes Engine (MKE)

    • Docker/Swarm metrics (through Telegraf)

    • Built-in MKE metrics

Storage-based log retention strategy

Available since 2.26.0 (17.1.0 and 16.1.0)

StackLight uses a storage-based log retention strategy that optimizes storage utilization and ensures effective data retention. A proportion of available disk space is defined as 80% of the disk space allocated to the OpenSearch node and is distributed among the following data types:

  • 80% for system logs

  • 10% for audit logs

  • 5% for OpenStack notifications (applies only to MOSK clusters)

  • 5% for Kubernetes events

This approach ensures that storage resources are efficiently allocated based on the importance and volume of different data types.
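
For example, if 200 GiB of disk space is allocated to an OpenSearch node, the retention budget is 80% of that space, that is, 160 GiB. Out of this budget, 128 GiB (80%) is reserved for system logs, 16 GiB (10%) for audit logs, 8 GiB (5%) for OpenStack notifications on MOSK clusters, and 8 GiB (5%) for Kubernetes events.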

The logging index management implies the following advantages:

  • Storage-based rollover mechanism

    The rollover mechanism for system and audit indices enforces shard size based on available storage, ensuring optimal resource utilization.

  • Consistent shard allocation

    The number of primary shards per index is dynamically set based on cluster size, which boosts search and facilitates ingestion for large clusters.

  • Minimal size of cluster state

    The size of the cluster state for logging is minimal and uses static mappings, which are based on the Elastic Common Schema (ECS) with slight deviations from the standard. Dynamic mapping in index templates is avoided to reduce overhead.

  • Storage compression

    The system and audit indices utilize the best_compression codec that minimizes the size of stored indices, resulting in significant storage savings of up to 50% on average.

  • No filter by logging level

    Because severity levels are not used consistently across Container Cloud components, logs of all severity levels are collected to avoid missing important low-severity logs while debugging a cluster. Filtering by tags is still available.

Outbound cluster metrics

The data collected is transmitted to Mirantis through an encrypted channel. It provides the Mirantis Customer Success Organization with information about the operational usage patterns that customers experience, as well as product usage statistics that enable the Mirantis product teams to enhance products and services for customers.

Mirantis collects the following statistics using configuration-collector:

Mirantis collects hardware information using the following metrics:

  • mcc_hw_machine_chassis

  • mcc_hw_machine_cpu_model

  • mcc_hw_machine_cpu_number

  • mcc_hw_machine_nics

  • mcc_hw_machine_ram

  • mcc_hw_machine_storage (storage devices and disk layout)

  • mcc_hw_machine_vendor

Mirantis collects the summary of all deployed Container Cloud configurations using the following objects, if any:

Note

The data is anonymized from all sensitive information, such as IDs, IP addresses, passwords, private keys, and so on.

  • Cluster

  • Machine

  • MCCUpgrade

  • BareMetalHost

  • BareMetalHostProfile

  • IPAMHost

  • IPAddr

  • KaaSCephCluster

  • L2Template

  • Subnet

Note

In the Cluster releases 17.0.0, 16.0.0, and 14.1.0, Mirantis does not collect any configuration summary in light of the configuration-collector refactoring.

The node-level resource data are broken down into three broad categories: Cluster, Node, and Namespace. The telemetry data tracks Allocatable, Capacity, Limits, Requests, and actual Usage of node-level resources.

Terms explanation

Term

Definition

Allocatable

On a Kubernetes Node, the amount of compute resources that are available for pods

Capacity

The total number of available resources regardless of current consumption

Limits

Constraints imposed by Administrators

Requests

The resources that a given container application is requesting

Usage

The actual usage or consumption of a given resource

The full list of the outbound data includes:

From management clusters
  • hostos_module_usage Since 2.28.0 (17.3.0, 16.3.0)

From Mirantis OpenStack for Kubernetes (MOSK) clusters
  • cluster_alerts_firing Since MOSK 23.1

  • cluster_filesystem_size_bytes

  • cluster_filesystem_usage_bytes

  • cluster_filesystem_usage_ratio

  • cluster_master_nodes_total

  • cluster_nodes_total

  • cluster_persistentvolumeclaim_requests_storage_bytes

  • cluster_total_alerts_triggered

  • cluster_capacity_cpu_cores

  • cluster_capacity_memory_bytes

  • cluster_usage_cpu_cores

  • cluster_usage_memory_bytes

  • cluster_usage_per_capacity_cpu_ratio

  • cluster_usage_per_capacity_memory_ratio

  • cluster_worker_nodes_total

  • cluster_workload_pods_total Since MOSK 23.1

  • cluster_workload_containers_total Since MOSK 23.1

  • kaas_info

  • kaas_cluster_machines_ready_total

  • kaas_cluster_machines_requested_total

  • kaas_clusters

  • kaas_cluster_updating Since MOSK 22.5

  • kaas_license_expiry

  • kaas_machines_ready

  • kaas_machines_requested

  • kubernetes_api_availability

  • mcc_cluster_update_plan_status Since MOSK 24.3 as TechPreview

  • mke_api_availability

  • mke_cluster_nodes_total

  • mke_cluster_containers_total

  • mke_cluster_vcpu_free

  • mke_cluster_vcpu_used

  • mke_cluster_vram_free

  • mke_cluster_vram_used

  • mke_cluster_vstorage_free

  • mke_cluster_vstorage_used

  • node_labels Since MOSK 23.2

  • openstack_cinder_api_latency_90

  • openstack_cinder_api_latency_99

  • openstack_cinder_api_status Removed in MOSK 24.1

  • openstack_cinder_availability

  • openstack_cinder_volumes_total

  • openstack_glance_api_status

  • openstack_glance_availability

  • openstack_glance_images_total

  • openstack_glance_snapshots_total Removed in MOSK 24.1

  • openstack_heat_availability

  • openstack_heat_stacks_total

  • openstack_host_aggregate_instances Removed in MOSK 23.2

  • openstack_host_aggregate_memory_used_ratio Removed in MOSK 23.2

  • openstack_host_aggregate_memory_utilisation_ratio Removed in MOSK 23.2

  • openstack_host_aggregate_cpu_utilisation_ratio Removed in MOSK 23.2

  • openstack_host_aggregate_vcpu_used_ratio Removed in MOSK 23.2

  • openstack_instance_availability

  • openstack_instance_create_end

  • openstack_instance_create_error

  • openstack_instance_create_start

  • openstack_keystone_api_latency_90

  • openstack_keystone_api_latency_99

  • openstack_keystone_api_status Removed in MOSK 24.1

  • openstack_keystone_availability

  • openstack_keystone_tenants_total

  • openstack_keystone_users_total

  • openstack_kpi_provisioning

  • openstack_lbaas_availability

  • openstack_mysql_flow_control

  • openstack_neutron_api_latency_90

  • openstack_neutron_api_latency_99

  • openstack_neutron_api_status Removed in MOSK 24.1

  • openstack_neutron_availability

  • openstack_neutron_lbaas_loadbalancers_total

  • openstack_neutron_networks_total

  • openstack_neutron_ports_total

  • openstack_neutron_routers_total

  • openstack_neutron_subnets_total

  • openstack_nova_all_compute_cpu_utilisation

  • openstack_nova_all_compute_mem_utilisation

  • openstack_nova_all_computes_total

  • openstack_nova_all_vcpus_total

  • openstack_nova_all_used_vcpus_total

  • openstack_nova_all_ram_total_gb

  • openstack_nova_all_used_ram_total_gb

  • openstack_nova_all_disk_total_gb

  • openstack_nova_all_used_disk_total_gb

  • openstack_nova_api_status Removed in MOSK 24.1

  • openstack_nova_availability

  • openstack_nova_compute_cpu_utilisation

  • openstack_nova_compute_mem_utilisation

  • openstack_nova_computes_total

  • openstack_nova_disk_total_gb

  • openstack_nova_instances_active_total

  • openstack_nova_ram_total_gb

  • openstack_nova_used_disk_total_gb

  • openstack_nova_used_ram_total_gb

  • openstack_nova_used_vcpus_total

  • openstack_nova_vcpus_total

  • openstack_public_api_status Since MOSK 22.5

  • openstack_quota_instances

  • openstack_quota_ram_gb

  • openstack_quota_vcpus

  • openstack_quota_volume_storage_gb

  • openstack_rmq_message_deriv

  • openstack_usage_instances

  • openstack_usage_ram_gb

  • openstack_usage_vcpus

  • openstack_usage_volume_storage_gb

  • osdpl_aodh_alarms Since MOSK 23.3

  • osdpl_api_success Since MOSK 24.1

  • osdpl_cinder_zone_volumes Since MOSK 23.3

  • osdpl_ironic_nodes Since MOSK 25.1

  • osdpl_manila_shares Since MOSK 24.2

  • osdpl_masakari_hosts Since MOSK 24.2

  • osdpl_neutron_availability_zone_info Since MOSK 23.3

  • osdpl_neutron_zone_routers Since MOSK 23.3

  • osdpl_nova_aggregate_hosts Since MOSK 23.3

  • osdpl_nova_audit_orphaned_allocations Since MOSK 24.3

  • osdpl_nova_availability_zone_info Since MOSK 23.3

  • osdpl_nova_availability_zone_instances Since MOSK 23.3

  • osdpl_nova_availability_zone_hosts Since MOSK 23.3

  • osdpl_version_info Since MOSK 23.3

  • tf_operator_info Since MOSK 23.3 for Tungsten Fabric

StackLight proxy

StackLight components, which require external access, automatically use the same proxy that is configured for Mirantis Container Cloud clusters. Therefore, you only need to configure proxy during deployment of your management or managed clusters. No additional actions are required to set up proxy for StackLight. For more details about implementation of proxy support in Container Cloud, see Proxy and cache support.

Note

Proxy handles only the HTTP and HTTPS traffic. Therefore, for clusters with limited or no Internet access, it is not possible to set up Alertmanager email notifications, which use SMTP, when proxy is used.

Proxy is used for the following StackLight components:

Component

Cluster type

Usage

Alertmanager

Any

As a default http_config for all HTTP-based receivers except the predefined HTTP-alerta and HTTP-salesforce. For these receivers, http_config is overridden on the receiver level.

Metric Collector

Management

To send outbound cluster metrics to Mirantis.

Salesforce notifier

Any

To send notifications to the Salesforce instance.

Salesforce reporter

Any

To send metric reports to the Salesforce instance.

Requirements

Using Mirantis Container Cloud, you can deploy a Mirantis Kubernetes Engine (MKE) cluster on bare metal, which requires the corresponding resources described below.

If you use a firewall or proxy, make sure that the bootstrap and management clusters have access to the following IP ranges and domain names required for the Container Cloud content delivery network and alerting:

  • IP ranges:

  • Domain names:

    • mirror.mirantis.com and repos.mirantis.com for packages

    • binary.mirantis.com for binaries and Helm charts

    • mirantis.azurecr.io and *.blob.core.windows.net for Docker images

    • mcc-metrics-prod-ns.servicebus.windows.net:9093 for Telemetry (port 9093 if proxy is disabled, or port 443 if proxy is enabled)

    • mirantis.my.salesforce.com and login.salesforce.com for Salesforce alerts

Note

  • Access to Salesforce is required from any Container Cloud cluster type.

  • If any additional Alertmanager notification receiver is enabled, for example, Slack, its endpoint must also be accessible from the cluster.

Caution

Regional clusters are unsupported since Container Cloud 2.25.0. Mirantis does not perform functional integration testing of the feature and the related code is removed in Container Cloud 2.26.0. If you still require this feature, contact Mirantis support for further information.

Reference hardware configuration

The following hardware configuration is used as a reference to deploy Mirantis Container Cloud with bare metal Container Cloud clusters with Mirantis Kubernetes Engine.

Reference hardware configuration for Container Cloud management and managed clusters on bare metal

Server role

Management cluster

Managed cluster

# of servers

3 1

6 2

CPU cores

Minimal: 16
Recommended: 32
Minimal: 16
Recommended: depends on workload

RAM, GB

Minimal: 64
Recommended: 256
Minimal: 64
Recommended: 128

System disk, GB 3

Minimal: SSD 1x 120
Recommended: NVME 1 x 960
Minimal: SSD 1 x 120
Recommended: NVME 1 x 960

SSD/HDD storage, GB

1 x 1900 4

2 x 1900

NICs 5

Minimal: 1 x 2-port
Recommended: 2 x 2-port
Minimal: 2 x 2-port
Recommended: depends on workload
1

Adding more than 3 nodes to a management cluster is not supported.

2

Three manager nodes for HA and three worker storage nodes for a minimal Ceph cluster.

3

A management cluster requires 2 volumes for Container Cloud (total 50 GB) and 5 volumes for StackLight (total 60 GB). A managed cluster requires 5 volumes for StackLight.

4

In total, at least 2 disks are required:

  • disk0 - minimum 120 GB for system

  • disk1 - minimum 120 GB for LocalVolumeProvisioner

For the default storage schema, see Default configuration of the host system storage

5

Only one PXE port per node is allowed. The out-of-band management (IPMI) port is not included.

System requirements for the seed node

The seed node is necessary only to deploy the management cluster. When the bootstrap is complete, the bootstrap node can be redeployed and its resources can be reused for the managed cluster workloads.

The minimum reference system requirements for a baremetal-based bootstrap seed node are as follows:

  • Basic server on Ubuntu 22.04 with the following configuration:

    • Kernel version 4.15.0-76.86 or later

    • 8 GB of RAM

    • 4 CPU

    • 10 GB of free disk space for the bootstrap cluster cache

  • No DHCP or TFTP servers on any NIC networks

  • Routable access to the IPMI network of the hardware servers. For more details, see Host networking.

  • Internet access for downloading all required artifacts

Network fabric

The following diagram illustrates the physical and virtual L2 underlay networking schema for the final state of the Mirantis Container Cloud bare metal deployment.

_images/bm-cluster-physical-and-l2-networking.png

The network fabric reference configuration is a spine/leaf with 2 leaf ToR switches and one out-of-band (OOB) switch per rack.

Reference configuration uses the following switches for ToR and OOB:

  • Cisco WS-C3560E-24TD with 24 x 1 GbE ports. Used in the OOB network segment.

  • Dell Force 10 S4810P with 48 x 1/10 GbE ports. Used as ToR in the Common/PXE network segment.

In the reference configuration, all odd interfaces from NIC0 are connected to TOR Switch 1, and all even interfaces from NIC0 are connected to TOR Switch 2. The Baseboard Management Controller (BMC) interfaces of the servers are connected to OOB Switch 1.

The following recommendations apply to all types of nodes:

  • Use the Link Aggregation Control Protocol (LACP) bonding mode with MC-LAG domains configured on leaf switches. This corresponds to the 802.3ad bond mode on hosts (see the bond sketch after this list).

  • Use ports from different multi-port NICs when creating bonds. This makes network connections redundant if failure of a single NIC occurs.

  • Configure the ports that connect servers to the PXE network with PXE VLAN as native or untagged. On these ports, configure LACP fallback to ensure that the servers can reach DHCP server and boot over network.
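
The following netplan-style sketch illustrates the recommended LACP (802.3ad) bond built from ports of two different NICs. The interface and bond names are assumptions; in Container Cloud, the actual host network configuration is defined through L2 templates rather than edited manually:

    network:
      version: 2
      ethernets:
        eno1: {}                          # first port, NIC 0
        ens2f0: {}                        # second port, NIC 1
      bonds:
        bond0:
          interfaces: [eno1, ens2f0]      # ports from two different multi-port NICs
          parameters:
            mode: 802.3ad                 # LACP, matches MC-LAG on the leaf switches
            lacp-rate: fast
            transmit-hash-policy: layer3+4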

DHCP range requirements for PXE

When setting up the network range for DHCP Preboot Execution Environment (PXE), keep in mind several considerations to ensure smooth server provisioning:

  • Determine the network size. For instance, if you target concurrent provisioning of 50+ servers, a /24 network is recommended. This specific size is crucial as it provides sufficient scope for the DHCP server to provide unique IP addresses to each new Media Access Control (MAC) address, thereby minimizing the risk of collision.

    The concept of collision refers to the likelihood of two or more devices being assigned the same IP address. With a /24 network, the collision probability using the SDBM hash function, which is used by the DHCP server, is low. If a collision occurs, the DHCP server provides a free address using a linear lookup strategy.

  • In the context of PXE provisioning, technically, the IP address does not need to be consistent for every new DHCP request associated with the same MAC address. However, maintaining the same IP address can enhance user experience, making the /24 network size more of a recommendation than an absolute requirement.

  • For a minimal network size, it is sufficient to cover the number of concurrently provisioned servers plus one additional address (50 + 1). This calculation applies after covering any exclusions that exist in the range. You can define excludes in the corresponding field of the Subnet object. For details, see API Reference: Subnet resource.

  • When the available address space is less than the minimum described above, you will not be able to automatically provision all servers. However, you can manually provision them by combining manual IP assignment for each bare metal host with manual pauses. For these operations, use the host.dnsmasqs.metal3.io/address and baremetalhost.metal3.io/detached annotations in the BareMetalHostInventory object. For details, see Operations Guide: Manually allocate IP addresses for bare metal hosts. An illustrative sketch of these annotations follows this list.

  • All addresses within the specified range must remain unused before provisioning. If the DHCP server issues an IP address that is already in use to a BOOTP client, that client cannot complete provisioning.
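
The following sketch shows how the annotations mentioned above might be set on a BareMetalHostInventory object for manual provisioning; the API version, object name, and address value are assumptions for illustration:

    apiVersion: kaas.mirantis.com/v1alpha1              # assumed API group and version
    kind: BareMetalHostInventory
    metadata:
      name: worker-0
      namespace: child-ns
      annotations:
        host.dnsmasqs.metal3.io/address: "10.0.10.51"   # manually assigned IP address for provisioning
        baremetalhost.metal3.io/detached: "true"        # pauses automated host management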

Management cluster storage

The management cluster requires a minimum of two storage devices per node. Each device is used for a different type of storage.

  • The first device is always used for boot partitions and the root file system. An SSD is recommended. A RAID device is not supported.

  • One storage device per server is reserved for local persistent volumes. These volumes are served by the Local Storage Static Provisioner (local-volume-provisioner) and used by many services of Container Cloud.

You can configure host storage devices using the BareMetalHostProfile resources. For details, see Customize the default bare metal host profile.

Proxy and cache support

Proxy support

If you require all Internet access to go through a proxy server for security and audit purposes, you can bootstrap management clusters using a proxy. The proxy server settings consist of three standard environment variables that are set prior to the bootstrap process:

  • HTTP_PROXY

  • HTTPS_PROXY

  • NO_PROXY

These settings are not propagated to managed clusters. However, you can enable separate proxy access on a managed cluster using the Container Cloud web UI. This proxy is intended for end user needs and is not used for a managed cluster deployment or for access to the Mirantis resources.

Caution

Since Container Cloud uses the OpenID Connect (OIDC) protocol for IAM authentication, management clusters require a direct non-proxy access from managed clusters.

StackLight components, which require external access, automatically use the same proxy that is configured for Container Cloud clusters.

On the managed clusters with limited Internet access, a proxy is required for StackLight components that use HTTP and HTTPS and are disabled by default but need external access if enabled, for example, for the Salesforce integration and Alertmanager notifications external rules. For more details about proxy implementation in StackLight, see StackLight proxy.

For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Requirements.

After enabling proxy support on managed clusters, proxy is used for:

  • Docker traffic on managed clusters

  • StackLight

  • OpenStack on MOSK-based clusters

Warning

Any modification to the Proxy object used in any cluster, for example, changing the proxy URL, NO_PROXY values, or certificate, leads to cordon-drain and Docker restart on the cluster machines.
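
For reference, a Proxy object typically carries the same three standard settings. The following minimal sketch assumes the kaas.mirantis.com API group and the httpProxy, httpsProxy, and noProxy spec fields, which may differ from the actual schema:

    apiVersion: kaas.mirantis.com/v1alpha1        # assumed API group and version
    kind: Proxy
    metadata:
      name: proxy-example
      namespace: child-ns
    spec:
      httpProxy: http://proxy.example.com:3128
      httpsProxy: http://proxy.example.com:3128
      noProxy: 10.0.0.0/8,.svc,.cluster.local     # comma-separated destinations accessed directly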

Artifacts caching

The Container Cloud managed clusters are deployed without direct Internet access in order to consume less Internet traffic in your cloud. The Mirantis artifacts used during managed clusters deployment are downloaded through a cache running on a management cluster. The feature is enabled by default on new managed clusters and will be automatically enabled on existing clusters during upgrade to the latest version.

Caution

IAM operations require a direct non-proxy access of a managed cluster to a management cluster.

MKE API limitations

To ensure the Mirantis Container Cloud stability in managing the Container Cloud-based Mirantis Kubernetes Engine (MKE) clusters, the following MKE API functionality is not available for the Container Cloud-based MKE clusters as compared to the MKE clusters that are not deployed by Container Cloud. Use the Container Cloud web UI or CLI for this functionality instead.

Public APIs limitations in a Container Cloud-based MKE cluster

API endpoint

Limitation

GET /swarm

Swarm Join Tokens are filtered out for all users, including admins.

PUT /api/ucp/config-toml

All requests are forbidden.

POST /nodes/{id}/update

Requests for the following changes are forbidden:

  • Change Role

  • Add or remove the com.docker.ucp.orchestrator.swarm and com.docker.ucp.orchestrator.kubernetes labels.

DELETE /nodes/{id}

All requests are forbidden.

MKE configuration management

This section describes configuration specifics of an MKE cluster deployed using Container Cloud.

MKE configuration managed by Container Cloud

Since 2.25.1 (Cluster releases 16.0.1 and 17.0.1), Container Cloud does not override changes in MKE configuration except the following list of parameters that are automatically managed by Container Cloud. These parameters are always overridden by the Container Cloud default values if modified directly using the MKE API. For details on configuration using the MKE API, see MKE configuration managed directly by the MKE API.

However, you can manually configure a few options from this list using the Cluster object of a Container Cloud cluster. They are labeled with the superscript and contain references to the respective configuration procedures in the Comments columns of the tables.

[audit_log_configuration]

MKE parameter name

Default value in Container Cloud

Comments

level

"metadata" 0
"" 1

You can configure this option either using the MKE API with no Container Cloud overrides or using the Cluster object of a Container Cloud cluster. For details, see Configure Kubernetes auditing and profiling and MKE documentation: MKE audit logging.

If configured using the Cluster object, use the same object to disable the option. Otherwise, it will be overridden by Container Cloud.

support_bundle_include_audit_logs

false

For configuration procedure, see comments above.

0

For management clusters since 2.26.0 (Cluster release 16.1.0)

1

For management and managed clusters since 2.24.3 (Cluster releases 15.0.2 and 14.0.2)

[auth]

MKE parameter name

Default value in Container Cloud

default_new_user_role

"restrictedcontrol"

backend

"managed"

samlEnabled

false

managedPasswordDisabled

false

[auth.external_identity_provider]

MKE parameter name

Default value in Container Cloud

issuer

"https://<Keycloak-external-address>/auth/realms/iam"

userServiceId

"<userServiceId>"

clientId

"kaas"

wellKnownConfigUrl

"https://<Keycloak-external-address>/auth/realms/iam/.well-known/openid-configuration"

caBundle

"<caCert>"

usernameClaim

""

httpProxy

""

httpsProxy

""

[hardening_configuration]

MKE parameter name

Default value in Container Cloud

hardening_enabled

true

limit_kernel_capabilities

true

pids_limit_int

100000

pids_limit_k8s

100000

pids_limit_swarm

100000

[scheduling_configuration]

MKE parameter name

Default value in Container Cloud

enable_admin_ucp_scheduling

true

default_node_orchestrator

kubernetes

[tracking_configuration]

MKE parameter name

Default value in Container Cloud

cluster_label

"prod"

[cluster_config]

MKE parameter name

Default value in Container Cloud

Comments

calico_ip_auto_method

interface=k8s-pods

calico_mtu

"1440"

For configuration steps, see Set the MTU size for Calico.

calico_vxlan

true

calico_vxlan_mtu

"1440"

calico_vxlan_port

"4792"

cloud_provider

""

controller_port

4443

custom_kube_api_server_flags

["--event-ttl=720h"]

Applies only to MKE on the management cluster.

custom_kube_controller_manager_flags

["--leader-elect-lease-duration=120s", "--leader-elect-renew-deadline=60s"]

custom_kube_scheduler_flags

["--leader-elect-lease-duration=120s", "--leader-elect-renew-deadline=60s"]

custom_kubelet_flags

["--serialize-image-pulls=false"]

etcd_storage_quota

""

For configuration steps, see Increase storage quota for etcd.

exclude_server_identity_headers

true

ipip_mtu

"1440"

kube_api_server_auditing

true 3
false 4

For configuration steps, see Configure Kubernetes auditing and profiling.

kube_api_server_audit_log_maxage 5

30

kube_api_server_audit_log_maxbackup 5

10

kube_api_server_audit_log_maxsize 5

10

kube_api_server_profiling_enabled

false

For configuration steps, see Configure Kubernetes auditing and profiling.

kube_apiserver_port

5443

kube_protect_kernel_defaults

true

local_volume_collection_mapping

false

manager_kube_reserved_resources

"cpu=1000m,memory=2Gi,ephemeral-storage=4Gi"

metrics_retention_time

"24h"

metrics_scrape_interval

"1m"

nodeport_range

"30000-32768"

pod_cidr

"10.233.64.0/18"

You can override this value in spec::clusterNetwork::pods::cidrBlocks: of the Cluster object (see the example after this table).

priv_attributes_allowed_for_service_accounts 2

["hostBindMounts", "hostIPC", "hostNetwork", "hostPID", "kernelCapabilities", "privileged"]

priv_attributes_service_accounts 2

["kube-system:helm-controller-sa", "kube-system:pod-garbage-collector", "stacklight:stacklight-helm-controller"]

profiling_enabled

false

prometheus_memory_limit

"4Gi"

prometheus_memory_request

"2Gi"

secure_overlay

true

service_cluster_ip_range

"10.233.0.0/18"

You can override this value in spec::clusterNetwork::services::cidrBlocks: of the Cluster object.

swarm_port

2376

swarm_strategy

"spread"

unmanaged_cni

false

vxlan_vni

10000

worker_kube_reserved_resources

"cpu=100m,memory=300Mi,ephemeral-storage=500Mi"

2(1,2)

For priv_attributes parameters, you can add custom options on top of existing parameters using the MKE API.

3

For management clusters since 2.26.0 (Cluster release 16.1.0).

4

For management and managed clusters since 2.24.3 (Cluster releases 15.0.2 and 14.0.2).

5(1,2,3)

For management and managed clusters since 2.27.0 (Cluster releases 17.2.0 and 16.2.0). For configuration steps, see Configure Kubernetes auditing and profiling.
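
For example, the pod_cidr and service_cluster_ip_range defaults listed above map to the cluster network settings of the Cluster object. The following fragment shows only the relevant part of the object; the API version and surrounding structure are assumptions for illustration:

    apiVersion: cluster.k8s.io/v1alpha1   # assumed API version
    kind: Cluster
    metadata:
      name: managed-cluster
      namespace: child-ns
    spec:
      clusterNetwork:
        pods:
          cidrBlocks:
          - 10.233.64.0/18                # overrides pod_cidr
        services:
          cidrBlocks:
          - 10.233.0.0/18                 # overrides service_cluster_ip_range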

Note

All possible values for the parameters labeled with the superscript, which you can manually configure using the Cluster object, are described in MKE Operations Guide: Configuration options.

MKE configuration managed directly by the MKE API

Since 2.25.1, aside from MKE parameters described in MKE configuration managed by Container Cloud, Container Cloud does not override changes in MKE configuration that are applied directly through the MKE API. For the configuration options and procedure, see MKE documentation:

  • MKE configuration options

  • Configure an existing MKE cluster

    While using this procedure, replace the command to upload the newly edited MKE configuration file with the following one:

    curl --silent --insecure -X PUT -H "X-UCP-Allow-Restricted-API: i-solemnly-swear-i-am-up-to-no-good" -H "accept: application/toml" -H "Authorization: Bearer $AUTHTOKEN" --upload-file 'mke-config.toml' https://$MKE_HOST/api/ucp/config-toml
    

Important

Mirantis cannot guarantee the expected behavior of the functionality configured using the MKE API because customer-specific configuration does not undergo testing within Container Cloud. Therefore, Mirantis recommends that you test custom MKE settings configured through the MKE API on a staging environment before applying them to production.

Deployment Guide

The subsections of this section were moved to MOSK Deployment Guide: Deploy a management cluster.

Deploy a Container Cloud management cluster

The subsections of this section were moved to MOSK Deployment Guide: Deploy a management cluster.

Introduction

This section was moved to MOSK Deployment Guide: Deploy a management cluster - Introduction.

Overview of the deployment workflow

This section was moved to MOSK Deployment Guide: Deploy a management cluster - Overview of the deployment workflow.

Set up a bootstrap cluster

This section was moved to MOSK Deployment Guide: Set up a bootstrap cluster.

Deploy a management cluster using the Container Cloud API

This section was moved to MOSK Deployment Guide: Deploy a management cluster.

Configure a bare metal deployment

The subsections of this section were moved to MOSK Deployment Guide: Deploy a management cluster - Configure a bare metal deployment.

Configure BIOS on a bare metal host

This section was moved to MOSK Deployment Guide: Configure bare metal settings - Configure BIOS on a bare metal host.

Customize the default bare metal host profile

This section was moved to MOSK Deployment Guide: Configure bare metal settings - Customize the default bare metal host profile.

Configure NIC bonding

This section was moved to MOSK Deployment Guide: Configure bare metal settings - Configure NIC bonding.

Separate PXE and management networks

This section was moved to MOSK Deployment Guide: Configure bare metal settings - Separate PXE and management networks.

Configure multiple DHCP address ranges

This section was moved to MOSK Deployment Guide: Configure bare metal settings - Configure multiple DHCP address ranges.

Enable dynamic IP allocation

This section was moved to MOSK Deployment Guide: Configure bare metal settings - Enable dynamic IP allocation.

Set a custom external IP address for the DHCP service

This section was moved to MOSK Deployment Guide: Configure bare metal settings - Set a custom external IP address for the DHCP service.

Configure optional cluster settings

This section was moved to MOSK Deployment Guide: Deploy a management cluster - Configure optional cluster settings.

Post-deployment steps

This section was moved to MOSK Deployment Guide: Deploy a management cluster - Post-deployment steps.

Troubleshooting

The subsections of this section were moved to MOSK Troubleshooting Guide: Troubleshoot a management cluster bootstrap.

Requirements for a MITM proxy

This section was moved to MOSK Deployment Guide: Requirements for a MITM proxy.

Create initial users after a management cluster bootstrap

This section was moved to MOSK Deployment Guide: Create initial users after a management cluster bootstrap.

Troubleshooting

The subsections of this section were moved to MOSK Troubleshooting Guide: Troubleshoot the bootstrap node configuration.

Troubleshoot the bootstrap node configuration

The subsections of this section were moved to MOSK Troubleshooting Guide: Troubleshoot the bootstrap node configuration.

Configure external identity provider for IAM

The subsections of this section were moved to MOSK Deployment Guide: Configure external identity provider for IAM.

Operations Guide

Mirantis Container Cloud CLI

This section was moved to MOSK documentation: Container Cloud CLI.

Create and operate managed clusters

Note

This tutorial applies only to the Container Cloud web UI users with the m:kaas:namespace@operator or m:kaas:namespace@writer access role assigned by the Infrastructure Operator. To add a bare metal host, the m:kaas@operator or m:kaas:namespace@bm-pool-operator role is required.

After you deploy the Mirantis Container Cloud management cluster, you can start creating managed clusters depending on your cloud needs.

The deployment procedure is performed using the Container Cloud web UI and comprises the following steps:

  1. Create a dedicated non-default project for managed clusters.

  2. Create and configure bare metal hosts with corresponding labels for machines such as worker, manager, or storage.

  3. Create an initial cluster configuration.

  4. Add the required amount of machines with the corresponding configuration to the managed cluster.

  5. Add a Ceph cluster.

Note

The Container Cloud web UI communicates with Keycloak to authenticate users. Keycloak is exposed using HTTPS with self-signed TLS certificates that are not trusted by web browsers.

To use your own TLS certificates for Keycloak, refer to Configure TLS certificates for cluster applications.

Create a project for managed clusters

This section was moved to MOSK Deployment Guide: Create a project for managed clusters.

Generate a kubeconfig for a managed cluster using API

This section was moved to Mirantis OpenStack for Kubernetes documentation: Getting access - Generate a kubeconfig for a cluster using API.

Create and operate a baremetal-based managed cluster

The subsections of this section were moved to Mirantis OpenStack for Kubernetes documentation: Create a managed cluster.

Add a bare metal host

The subsections of this section were moved to MOSK Deployment Guide: Add a bare metal host.

Add a bare metal host using web UI

This section was moved to MOSK Deployment Guide: Add a bare metal host using web UI.

Add a bare metal host using CLI

This section was moved to MOSK Deployment Guide: Add a bare metal host using CLI.

Create a custom bare metal host profile

The subsections of this section were moved to MOSK Deployment Guide: Create MOSK host profiles.

Default configuration of the host system storage

This section was moved to MOSK Deployment Guide: Default configuration of the host system storage.

Wipe a device or partition

Available since 2.26.0 (17.1.0 and 16.1.0)

This section was moved to MOSK Deployment Guide: Wipe a device or partition.

Create a custom host profile

This section was moved to MOSK Deployment Guide: Create a custom host profile.

Configure Ceph disks in a host profile

This section was moved to MOSK Deployment Guide: Configure Ceph disks in a host profile.

Enable huge pages

This section was moved to MOSK Deployment Guide: Enable huge pages.

Configure RAID support

Caution

This feature is available as Technology Preview. Use such configuration for testing and evaluation purposes only. For the Technology Preview feature definition, refer to Technology Preview features.

The subsections of this section were moved to MOSK Deployment Guide: Configure RAID support.

Create an LVM software RAID level 1 (raid1)

Caution

This feature is available as Technology Preview. Use such configuration for testing and evaluation purposes only. For the Technology Preview feature definition, refer to Technology Preview features.

This section was moved to MOSK Deployment Guide: Create an LVM software RAID level 1 (raid1).

Create an mdadm software RAID level 1 (raid1)

Caution

This feature is available as Technology Preview. Use such configuration for testing and evaluation purposes only. For the Technology Preview feature definition, refer to Technology Preview features.

This section was moved to MOSK Deployment Guide: Create an mdadm software RAID level 1 (raid1).

Create an mdadm software RAID level 10 (raid10)

Technology Preview

This section was moved to MOSK Deployment Guide: Create an mdadm software RAID level 10 (raid10).

Add a managed baremetal cluster

The subsections of this section were moved to Mirantis OpenStack for Kubernetes documentation: Create a managed cluster.

Create a cluster using web UI

This section was moved to MOSK Deployment Guide: Create a cluster using web UI.

Workflow of network interface naming

This section was moved to MOSK Deployment Guide: Workflow of network interface naming.

Create subnets

The subsections of this section were moved to Mirantis OpenStack for Kubernetes documentation: Create subnets.

Service labels and their life cycle

This section was moved to MOSK Deployment Guide: Service labels and their life cycle.

MetalLB configuration guidelines for subnets

This section was moved to MOSK Deployment Guide: MetalLB configuration guidelines for subnets.

Configure MetalLB

This section was moved to MOSK Deployment Guide: Configure MetalLB.

Configure node selector for MetalLB speaker

This section was moved to MOSK Deployment Guide: Configure node selector for MetalLB speaker.

Create subnets for a managed cluster using web UI

This section was moved to MOSK Deployment Guide: Create subnets for a managed cluster using web UI.

Create subnets for a managed cluster using CLI

This section was moved to MOSK Deployment Guide: Create subnets for a managed cluster using CLI.

Automate multiple subnet creation using SubnetPool

Unsupported since 2.28.0 (17.3.0 and 16.3.0)

This section was moved to MOSK Deployment Guide: Automate multiple subnet creation using SubnetPool.

Create L2 templates

The subsections of this section were moved to Mirantis OpenStack for Kubernetes documentation: Create L2 templates.

L2 template example with bonds and bridges

This section was moved to MOSK Deployment Guide: L2 template example with bonds and bridges.

L2 template example for automatic multiple subnet creation

Unsupported since 2.28.0 (17.3.0 and 16.3.0)

This section was moved to MOSK Deployment Guide: L2 template example for automatic multiple subnet creation.

Configure BGP announcement for cluster API LB address

This section was moved to MOSK Deployment Guide: Configure BGP announcement for cluster API LB address.

Add a machine

The subsections of this section were moved to MOSK Deployment Guide: Add a machine.

Create a machine using web UI

This section was moved to MOSK Deployment Guide: Add a machine using web UI.

Create a machine using CLI

The subsections of this section were moved to MOSK Deployment Guide: Add a machine.

Deploy a machine to a specific bare metal host

This section was moved to MOSK Deployment Guide: Deploy a machine to a specific bare metal host.

Assign L2 templates to machines

This section was moved to MOSK Deployment Guide: Assign L2 templates to machines.

Override network interfaces naming and order

This section was moved to MOSK Deployment Guide: Override network interfaces naming and order.

Manually allocate IP addresses for bare metal hosts

Available since Cluster releases 16.0.0 and 17.0.0 as TechPreview and since 16.1.0 and 17.1.0 as GA

This section was moved to MOSK Deployment Guide: Manually allocate IP addresses for bare metal hosts.

Add a Ceph cluster

The subsections of this section were moved to MOSK Deployment Guide: Add a Ceph cluster.

Add a Ceph cluster using web UI

This section was moved to MOSK Deployment Guide: Add a Ceph cluster.

Add a Ceph cluster using CLI

This section was moved to MOSK Deployment Guide: Add a Ceph cluster.

Example of a complete L2 templates configuration for cluster creation

This section was moved to MOSK documentation: Deploy a managed cluster - Example of a complete template configuration for cluster creation.

The subsections of this section were moved to Mirantis OpenStack for Kubernetes documentation: Operations Guide.

Manage an existing bare metal cluster

The subsections of this section were moved to Mirantis OpenStack for Kubernetes documentation: Bare metal operations.

Manage machines of a bare metal cluster

The subsections of this section were moved to Mirantis OpenStack for Kubernetes documentation: Bare metal operations.

Upgrade an operating system distribution

Available since 14.0.1 and 15.0.1 for MOSK 23.2

This section was moved to Mirantis OpenStack for Kubernetes documentation: Bare metal operations.

Remove old Ubuntu kernel packages

Available since 2.25.0

This section was moved to Mirantis OpenStack for Kubernetes documentation: Bare metal operations.

Modify network configuration on an existing machine

TechPreview

This section was moved to Mirantis OpenStack for Kubernetes documentation: Bare metal operations.

Change a user name and password for a bare metal host

This section was moved to Mirantis OpenStack for Kubernetes documentation: Bare metal operations.

Manage Ceph

The subsections of this section were moved to Mirantis OpenStack for Kubernetes documentation: Manage Ceph.

Ceph advanced configuration

This section was moved to Mirantis OpenStack for Kubernetes documentation: Ceph advanced configuration.

Ceph default configuration options

This section was moved to Mirantis OpenStack for Kubernetes documentation: Ceph advanced configuration.

Automated Ceph LCM

The subsections of this section were moved to MOSK documentation: Automated Ceph LCM.

High-level workflow of Ceph OSD or node removal

The subsections of this section were moved to Mirantis OpenStack for Kubernetes documentation: High-level workflow of Ceph OSD or node removal.

Creating a Ceph OSD removal request

This section was moved to Mirantis OpenStack for Kubernetes documentation: Creating a Ceph OSD removal request.

KaaSCephOperationRequest OSD removal specification

This section was moved to Mirantis OpenStack for Kubernetes documentation: KaaSCephOperationRequest OSD removal specification.

KaaSCephOperationRequest OSD removal status

This section was moved to Mirantis OpenStack for Kubernetes documentation: KaaSCephOperationRequest OSD removal status.

Add, remove, or reconfigure Ceph nodes

This section was moved to MOSK documentation: Ceph operations - Add, remove, or reconfigure Ceph nodes.

Add, remove, or reconfigure Ceph OSDs

This section was moved to MOSK documentation: Ceph operations - Add, remove, or reconfigure Ceph OSDs.

Add, remove, or reconfigure Ceph OSDs with metadata devices

This section was moved to MOSK documentation: Automated Ceph LCM - Add, remove, or reconfigure Ceph OSDs with metadata devices.

Replace a failed Ceph OSD

This section was moved to MOSK documentation: Automated Ceph LCM - Replace a failed Ceph OSD.

Replace a failed Ceph OSD with a metadata device

The subsections of this section were moved to MOSK documentation: Automated Ceph LCM - Replace a failed Ceph OSD with a metadata device.

Replace a failed Ceph OSD with a metadata device as a logical volume path

This section was moved to MOSK documentation: Automated Ceph LCM - Replace a failed Ceph OSD with a metadata device as a logical volume path.

Replace a failed Ceph OSD disk with a metadata device as a device name

This section was moved to MOSK documentation: Automated Ceph LCM - Replace a failed Ceph OSD disk with a metadata device as a device name.

Replace a failed metadata device

This section was moved to MOSK documentation: Automated Ceph LCM - Replace a failed metadata device.

Replace a failed Ceph node

This section was moved to MOSK documentation: Automated Ceph LCM - Replace a failed Ceph node.

Migrate Ceph cluster to address storage devices using by-id

This section was moved to MOSK documentation: Ceph operations - Migrate Ceph cluster to address storage devices using by-id.

Increase Ceph cluster storage size

This section was moved to MOSK documentation: Ceph operations - Increase Ceph cluster storage size.

Move a Ceph Monitor daemon to another node

This section was moved to MOSK documentation: Ceph operations - Move a Ceph Monitor daemon to another node.

Migrate a Ceph Monitor before machine replacement

This section was moved to MOSK documentation: Ceph operations - Migrate a Ceph Monitor before machine replacement.

Enable Ceph RGW Object Storage

This section was moved to MOSK documentation: Ceph operations - Enable Ceph RGW Object Storage.

Enable multisite for Ceph RGW Object Storage

This section was moved to MOSK documentation: Ceph operations - Enable multisite for Ceph RGW Object Storage.

Manage Ceph RBD or CephFS clients and RGW users

Available since 2.21.0 for non-MOSK clusters

The subsections of this section were moved to MOSK documentation: Ceph operations - Manage Ceph RBD or CephFS clients and RGW users.

Manage Ceph RBD or CephFS clients

This section was moved to MOSK documentation: Ceph operations - Manage Ceph RBD or CephFS clients.

Manage Ceph Object Storage users

This section was moved to MOSK documentation: Ceph operations - Manage Ceph Object Storage users.

Set an Amazon S3 bucket policy

The subsections of this section were moved to MOSK documentation: Ceph operations - Set an Amazon S3 bucket policy.

Create Ceph Object Storage users

This section was moved to MOSK documentation: Ceph operations - Create Ceph Object Storage users.

Set a bucket policy for a Ceph Object Storage user

This section was moved to MOSK documentation: Ceph operations - Set a bucket policy for a Ceph Object Storage user.

Verify Ceph

The subsections of this section were moved to Mirantis OpenStack for Kubernetes documentation: Ceph operations - Verify Ceph.

Enable Ceph tolerations and resources management

The subsections of this section were moved to Mirantis OpenStack for Kubernetes documentation: Ceph operations - Enable Ceph tolerations and resources management.

Enable Ceph multinetwork

This section was moved to Mirantis OpenStack for Kubernetes documentation: Ceph operations - Enable Ceph multinetwork.

Enable TLS for Ceph public endpoints

This section was moved to Mirantis OpenStack for Kubernetes documentation: Ceph operations - Configure Ceph Object Gateway TLS.

Enable Ceph RBD mirroring

This section was moved to Mirantis OpenStack for Kubernetes documentation: Ceph operations - Enable Ceph RBD mirroring.

Enable Ceph Shared File System (CephFS)

Available since 2.22.0 as GA

This section was moved to Mirantis OpenStack for Kubernetes documentation: Ceph operations - Enable Ceph Shared File System (CephFS).

Share Ceph across two managed clusters

TechPreview Available since 2.22.0

This section was moved to Mirantis OpenStack for Kubernetes documentation: Ceph operations - Share Ceph across two managed clusters.

Calculate target ratio for Ceph pools

This section was moved to MOSK documentation: Ceph operations - Calculate target ratio for Ceph pools.

Specify placement of Ceph cluster daemons

This section was moved to Mirantis OpenStack for Kubernetes documentation: Ceph operations - Specify placement of Ceph cluster daemons.

Migrate Ceph pools from one failure domain to another

This section was moved to Mirantis OpenStack for Kubernetes documentation: Ceph operations - Migrate Ceph pools from one failure domain to another.

Enable periodic Ceph performance testing

TechPreview

The subsections of this section were moved to Mirantis OpenStack for Kubernetes documentation: Enable periodic Ceph performance testing.

Create a Ceph performance test request

TechPreview

This section was moved to Mirantis OpenStack for Kubernetes documentation: Ceph operations - Create a Ceph performance test request.

KaaSCephOperationRequest CR perftest specification

TechPreview

This section was moved to Mirantis OpenStack for Kubernetes documentation: Ceph operations - KaaSCephOperationRequest CR perftest specification.

KaaSCephOperationRequest perftest status

TechPreview

This section was moved to Mirantis OpenStack for Kubernetes documentation: Ceph operations - KaaSCephOperationRequest perftest status.

Delete a managed cluster

This section was moved to MOSK documentation: General operations - Delete a managed cluster.

Day-2 operations

TechPreview since 2.26.0 (17.1.0 and 16.1.0)

The subsections of this section were moved to MOSK documentation: Host operating system configuration - Day-2 operations.

Day-2 operations workflow

TechPreview since 2.26.0 (17.1.0 and 16.1.0)

This section was moved to MOSK documentation: Day-2 operations - Day-2 operations workflow.

Global recommendations for implementation of custom modules

This section was moved to MOSK documentation: Day-2 operations - Global recommendations for implementation of custom modules.

Format and structure of a module package

TechPreview since 2.26.0 (17.1.0 and 16.1.0)

This section was moved to MOSK documentation: Day-2 operations - Format and structure of a module package.

Modules provided by Container Cloud

TechPreview since 2.27.0 (17.2.0 and 16.2.0)

The subsections of this section were moved to host-os-modules documentation.

irqbalance module

TechPreview since 2.27.0 (17.2.0 and 16.2.0)

This section was moved to host-os-modules documentation: irqbalance module.

package module

TechPreview since 2.27.0 (17.2.0 and 16.2.0)

This section was moved to host-os-modules documentation: package module.

sysctl module

TechPreview since 2.27.0 (17.2.0 and 16.2.0)

This section was moved to host-os-modules documentation: sysctl module.

HostOSConfiguration and HostOSConfigurationModules concepts

TechPreview since 2.26.0 (17.1.0 and 16.1.0)

This section was moved to MOSK documentation: Day-2 operations - HostOSConfiguration and HostOSConfigurationModules concepts.

Internal API for day-2 operations

TechPreview since 2.26.0 (17.1.0 and 16.1.0)

This section was moved to MOSK documentation: Day-2 operations - Internal API for day-2 operations.

Add a custom module to a Container Cloud deployment

TechPreview since 2.26.0 (17.1.0 and 16.1.0)

This section was moved to MOSK documentation: Day-2 operations - Add a custom module to a Container Cloud deployment.

Test a custom or Container Cloud module after creation

TechPreview since 2.26.0 (17.1.0 and 16.1.0)

This section was moved to MOSK documentation: Day-2 operations - Test a custom or Container Cloud module after creation.

Retrigger a module configuration

This section was moved to MOSK documentation: Day-2 operations - Retrigger a module configuration.

Troubleshooting

This section was moved to MOSK documentation: Day-2 operations - Troubleshooting.

Add or update a CA certificate for a MITM proxy using API

This section was moved to MOSK documentation: Underlay Kubernetes operations - Add or update a CA certificate for a MITM proxy using API.

Add a custom OIDC provider for MKE

Available since 17.0.0, 16.0.0, and 14.1.0

By default, MKE uses Keycloak as the OIDC provider. Using the ClusterOIDCConfiguration custom resource, you can add your own OpenID Connect (OIDC) provider for MKE on managed clusters to authenticate user requests to Kubernetes. For OIDC provider requirements, see OIDC official specification.

Note

For OpenStack and StackLight, Container Cloud supports only Keycloak, which is configured on the management cluster, as the OIDC provider.

To add a custom OIDC provider for MKE:

  1. Configure the OIDC provider:

    1. Log in to the OIDC provider dashboard.

    2. Create an OIDC client. If you are going to use an existing one, skip this step.

    3. Add the MKE redirectURL of the managed cluster to the OIDC client. By default, the URL format is https://<MKE IP>:6443/login.

    4. Add the <Container Cloud web UI IP>/token to the OIDC client for generation of kubeconfig files of the target managed cluster through the Container Cloud web UI.

    5. Ensure that the aud (audience) claim of the issued id_token equals the created client ID.

    6. Optional. Allow MKE to refresh authentication when id_token expires by allowing the offline_access claim for the OIDC client.

  2. Create the ClusterOIDCConfiguration object in the YAML format containing the OIDC client settings. For details, see API Reference: ClusterOIDCConfiguration resource for MKE.

    Warning

    The kubectl apply command automatically saves the applied data as plain text into the kubectl.kubernetes.io/last-applied-configuration annotation of the corresponding object. This may result in revealing sensitive data in this annotation when creating or modifying the object.

    Therefore, do not use kubectl apply on this object. Use kubectl create, kubectl patch, or kubectl edit instead.

    If you used kubectl apply on this object, you can remove the kubectl.kubernetes.io/last-applied-configuration annotation from the object using kubectl edit.

    The ClusterOIDCConfiguration object is created in the management cluster. Users with the m:kaas:ns@operator/writer/member roles have access to this object.

    Once done, the following dependent object is created automatically in the target managed cluster: the rbac.authorization.k8s.io/v1/ClusterRoleBinding object that binds the admin group defined in spec:adminRoleCriteria:value to the cluster-admin rbac.authorization.k8s.io/v1/ClusterRole object.

  3. In the Cluster object of the managed cluster, add the name of the ClusterOIDCConfiguration object to the spec.providerSpec.value.oidc field, as illustrated in the sketch after this procedure.

  4. Wait until the cluster machines switch from the Reconfigure to Ready state for the changes to apply.
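
The following sketch illustrates steps 2 and 3 using kubectl. The file name, cluster name, and project name are placeholders, and the sketch assumes that the spec.providerSpec.value.oidc field accepts the ClusterOIDCConfiguration object name as a string:

# Create the ClusterOIDCConfiguration object; do not use kubectl apply
kubectl create -f clusteroidcconfiguration.yaml

# Reference the created object by name in the Cluster object of the managed cluster
kubectl patch cluster <cluster-name> -n <project-name> --type merge \
  -p '{"spec":{"providerSpec":{"value":{"oidc":"<ClusterOIDCConfiguration-name>"}}}}'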

Change a cluster configuration

This section was moved to MOSK documentation: General Operations - Change a cluster configuration.

Disable a machine

TechPreview since 2.25.0 (17.0.0 and 16.0.0) for workers on managed clusters

This section was moved to MOSK documentation: General Operations - Disable a machine.

Configure the parallel update of worker nodes

Available since 17.0.0, 16.0.0, and 14.1.0 as GA Available since 14.0.1(0) and 15.0.1 as TechPreview

This section was moved to MOSK documentation: Cluster update - Configure the parallel update of worker nodes.

Create update groups for worker machines

Available since 2.27.0 (17.2.0 and 16.2.0)

This section was moved to MOSK documentation: Cluster update - Create update groups for worker machines.

Change the upgrade order of a machine or machine pool

This section was moved to MOSK documentation: Cluster update - Change the upgrade order of a machine or machine pool.

Update a managed cluster

The subsections of this section were moved to MOSK documentation: Cluster update.

Verify the Container Cloud status before managed cluster update

This section was moved to MOSK documentation: Cluster update - Verify the management cluster status before MOSK update.

Update a managed cluster using the Container Cloud web UI

This section was moved to MOSK documentation: Cluster update - Update to a major version.

Granularly update a managed cluster using the ClusterUpdatePlan object

Available since 2.27.0 (17.2.0 and 16.2.0) TechPreview

This section was moved to MOSK documentation: Cluster update - Granularly update a managed cluster using the ClusterUpdatePlan object.

Update a patch Cluster release of a managed cluster

Available since 2.23.2

This section was moved to MOSK documentation: Cluster update - Update a patch Cluster release of a managed cluster.

Add a Container Cloud cluster to Lens

This section was moved to Mirantis OpenStack for Kubernetes documentation: Getting access - Add a Container Cloud cluster to Lens.

Connect to the Mirantis Kubernetes Engine web UI

This section was moved to Mirantis OpenStack for Kubernetes documentation: Getting access.

Connect to a Mirantis Container Cloud cluster

This section was moved to Mirantis OpenStack for Kubernetes documentation: Getting access - Connect to a MOSK cluster.

Inspect the history of a cluster and machine deployment or update

Available since 2.22.0

This section was moved to MOSK Troubleshooting Guide: Inspect the history of a cluster and machine deployment or update.

Operate management clusters

The subsections of this section were moved to MOSK documentation: Management cluster operations.

Workflow and configuration of management cluster upgrade

This section was moved to MOSK documentation: Management cluster operations - Workflow and configuration of management cluster upgrade.

Schedule Mirantis Container Cloud updates

This section was moved to MOSK documentation: Management cluster operations - Schedule Mirantis Container Cloud updates.

Renew the Container Cloud and MKE licenses

This section was moved to MOSK documentation: Management cluster operations - Renew the Container Cloud and MKE licenses.

Configure NTP server

This section was moved to MOSK documentation: Management cluster operations - Configure NTP server.

Automatically propagate Salesforce configuration to all clusters

This section was moved to MOSK documentation: Management cluster operations - Automatically propagate Salesforce configuration to all clusters.

Update the Keycloak IP address on bare metal clusters

This section was moved to MOSK documentation: Management cluster operations - Update the Keycloak IP address on bare metal clusters.

Configure host names for cluster machines

TechPreview Available since 2.24.0

This section was moved to MOSK documentation: Management cluster operations - Configure host names for cluster machines.

Back up MariaDB on a management cluster

The subsections of this section were moved to MOSK documentation: Management cluster operations - Back up MariaDB on a management cluster.

Configure periodic backups of MariaDB

This section was moved to MOSK documentation: Management cluster operations - Configure periodic backups of MariaDB.

Verify operability of the MariaDB backup jobs

This section was moved to MOSK documentation: Management cluster operations - Verify operability of the MariaDB backup jobs.

Restore MariaDB databases

This section was moved to MOSK documentation: Management cluster operations - Restore MariaDB databases.

Change the storage node for MariaDB on bare metal clusters

This section was moved to MOSK documentation: Management cluster operations - Change the storage node for MariaDB.

Remove a management cluster

This section was moved to MOSK documentation: Management cluster operations - Remove a management cluster.

Warm up the Container Cloud cache

TechPreview Available since 2.24.0 and 23.2 for MOSK clusters

This section was moved to MOSK documentation: Management cluster operations - Warm up the Container Cloud cache.

Self-diagnostics for management and managed clusters

Available since 2.28.0 (17.3.0 and 16.3.0)

The subsections of this section were moved to MOSK Operations Guide: Bare metal operations - Run cluster self-diagnostics.

Trigger self-diagnostics for a management or managed cluster

Available since 2.28.0 (17.3.0 and 16.3.0)

This section was moved to MOSK Operations Guide: Bare metal operations - Trigger self-diagnostics for a management or managed cluster.

Self-upgrades of the Diagnostic Controller

Available since 2.28.0 (17.3.0 and 16.3.0)

This section was moved to MOSK Operations Guide: Bare metal operations - Self-upgrades of the Diagnostic Controller.

Diagnostic checks for the bare metal provider

Available since 2.28.0 (17.3.0 and 16.3.0) Technology Preview

This section was moved to MOSK Operations Guide: Bare metal operations - Diagnostic checks for the bare metal provider.

Increase memory limits for cluster components

This section was moved to MOSK documentation: Underlay Kubernetes operations - Increase memory limits for cluster components.

Set the MTU size for Calico

TechPreview Available since 2.24.0 and 2.24.2 for MOSK 23.2

This section was moved to MOSK documentation: Underlay Kubernetes operations - Set the MTU size for Calico.

Increase storage quota for etcd

Available since Cluster releases 15.0.3 and 14.0.3

This section was moved to MOSK documentation: Underlay Kubernetes operations - Increase storage quota for etcd.

Configure Kubernetes auditing and profiling

Available since 2.24.3 (Cluster releases 15.0.2 and 14.0.2)

This section was moved to MOSK documentation: Underlay Kubernetes operations - Configure Kubernetes auditing and profiling.

Configure TLS certificates for cluster applications

Technology Preview

This section was moved to MOSK documentation: Underlay Kubernetes operations - Configure TLS certificates for cluster applications.

Define a custom CA certificate for a private Docker registry

This section was moved to MOSK documentation: Underlay Kubernetes operations - Define a custom CA certificate for a private Docker registry.

Enable cluster and machine maintenance mode

The subsections of this section were moved to MOSK documentation: General Operations - Enable cluster and machine maintenance mode.

Enable maintenance mode on a cluster and machine using web UI

This section was moved to MOSK documentation: General Operations - Enable maintenance mode on a cluster and machine using web UI.

Enable maintenance mode on a cluster and machine using CLI

This section was moved to MOSK documentation: General Operations - Enable maintenance mode on a cluster and machine using CLI.

Perform a graceful reboot of a cluster

Available since 2.23.0

This section was moved to MOSK documentation: General Operations - Perform a graceful reboot of a cluster.

Delete a cluster machine

The subsections of this section were moved to MOSK documentation: Delete a cluster machine.

Precautions for a cluster machine deletion

This section was moved to MOSK documentation: Precautions for a cluster machine deletion.

Delete a cluster machine using web UI

This section was moved to MOSK documentation: Delete a cluster machine using web UI.

Delete a cluster machine using CLI

This section was moved to MOSK documentation: Delete a cluster machine using CLI.

Manage IAM

The subsections of this section were moved to MOSK documentation: IAM operations.

Manage user roles through Container Cloud API

The subsections of this section were moved to MOSK documentation: IAM operations - Manage user roles through Container Cloud API.

Manage user roles through the Container Cloud web UI

This section was moved to MOSK documentation: IAM operations - Manage user roles through the Container Cloud web UI.

Manage user roles through Keycloak

The subsections of this section were moved to MOSK documentation: IAM operations - Manage user roles through Keycloak.

Container Cloud roles and scopes

This section was moved to MOSK documentation: IAM operations - Container Cloud roles and scopes.

Use cases

This section was moved to MOSK documentation: Manage user roles through Keycloak - Use cases.

Access the Keycloak Admin Console

This section was moved to Mirantis OpenStack for Kubernetes documentation: Getting access - Access the Keycloak Admin Console.

Change passwords for IAM users

This section was moved to MOSK documentation: IAM operations - Change passwords for IAM users.

Obtain MariaDB credentials for IAM

Available since Container Cloud 2.22.0

This section was moved to MOSK documentation: IAM operations - Obtain MariaDB credentials for IAM.

Manage Keycloak truststore using the Container Cloud web UI

Available since 2.26.0 (17.1.0 and 16.1.0)

This section was moved to MOSK documentation: IAM operations - Manage Keycloak truststore using the Container Cloud web UI.

Manage StackLight

The subsections of this section were moved to MOSK Operations Guide: StackLight operations.

Access StackLight web UIs

This section was moved to Mirantis OpenStack for Kubernetes documentation: Getting access.

StackLight logging indices

Available since 2.26.0 (17.1.0 and 16.1.0)

This section was moved to MOSK Reference Architecture: StackLight logging indices.

OpenSearch Dashboards

The subsections of this section were moved to MOSK Operations Guide: OpenSearch Dashboards.

View OpenSearch Dashboards

This section was moved to MOSK Operations Guide: View OpenSearch Dashboards.

Search in OpenSearch Dashboards

This section was moved to MOSK Operations Guide: Search in OpenSearch Dashboards.

Export logs from OpenSearch Dashboards to CSV

Available since 2.23.0 (12.7.0 and 11.7.0)

This section was moved to MOSK Operations Guide: Export logs from OpenSearch Dashboards to CSV.

Tune OpenSearch performance for the bare metal provider

This section was moved to MOSK Operations Guide: Tune OpenSearch performance.

View Grafana dashboards

This section was moved to MOSK Operations Guide: View Grafana dashboards.

Export data from Table panels of Grafana dashboards to CSV

This section was moved to MOSK Operations Guide: Export data from Table panels of Grafana dashboards to CSV.

Available StackLight alerts

The subsections of this section were moved to MOSK Operations Guide: StackLight alerts.

Alert dependencies

This section was moved to MOSK Operations Guide: Alert dependencies.

Alertmanager

This section was moved to MOSK Operations Guide: StackLight alerts - Alertmanager.

Bond interface

Available since 2.24.0 and 2.24.2 for MOSK 23.2

This section was moved to MOSK Operations Guide: Bare metal alerts - Bond interface.

cAdvisor

This section was moved to MOSK Operations Guide: StackLight alerts - cAdvisor.

Calico

This section was moved to MOSK Operations Guide: Generic alerts - Calico.

Ceph

This section was moved to MOSK Operations Guide: StackLight alerts - Ceph.

Docker Swarm

This section was moved to MOSK Operations Guide: StackLight alerts - Mirantis Kubernetes Engine.

Elasticsearch Exporter

This section was moved to MOSK Operations Guide: StackLight alerts - Elasticsearch Exporter.

Etcd

This section was moved to MOSK Operations Guide: Generic alerts - Etcd.

External endpoint

This section was moved to MOSK Operations Guide: StackLight alerts - Monitoring of external endpoints.

Fluentd

This section was moved to MOSK Operations Guide: StackLight alerts - Fluentd.

General alerts

This section was moved to MOSK Operations Guide: General StackLight alerts.

General node alerts

This section was moved to MOSK Operations Guide: StackLight alerts - Node.

Grafana

This section was moved to MOSK Operations Guide: StackLight alerts - Grafana.

Helm Controller

This section was moved to MOSK Operations Guide: Container Cloud alerts - Helm Controller.

Host Operating System Modules Controller

TechPreview since 2.28.0 (17.3.0 and 16.3.0)

This section was moved to MOSK Operations Guide: Bare metal alerts - Host Operating System Modules Controller.

Ironic

This section was moved to MOSK Operations Guide: Bare metal alerts - Ironic.

Kernel

This section was moved to MOSK Operations Guide: Bare metal alerts - Kernel.

Kubernetes applications

This section was moved to MOSK Operations Guide: Kubernetes alerts - Kubernetes applications.

Kubernetes resources

This section was moved to MOSK Operations Guide: Kubernetes alerts - Kubernetes resources.

Kubernetes storage

This section was moved to MOSK Operations Guide: Kubernetes alerts - Kubernetes storage.

Kubernetes system

This section was moved to MOSK Operations Guide: Kubernetes alerts - Kubernetes system.

Mirantis Container Cloud

This section was moved to MOSK Operations Guide: StackLight alerts - Mirantis Container Cloud.

Mirantis Container Cloud cache

This section was moved to MOSK Operations Guide: StackLight alerts - Mirantis Container Cloud cache.

Mirantis Container Cloud controllers

Available since Cluster releases 12.7.0 and 11.7.0

This section was moved to MOSK Operations Guide: StackLight alerts - Mirantis Container Cloud controllers.

Mirantis Container Cloud providers

Available since Cluster releases 12.7.0 and 11.7.0

This section was moved to MOSK Operations Guide: StackLight alerts - Mirantis Container Cloud providers.

Mirantis Kubernetes Engine

This section was moved to MOSK Operations Guide: StackLight alerts - Mirantis Kubernetes Engine.

Node network

This section was moved to MOSK Operations Guide: StackLight alerts - Node network.

Node time

This section was moved to MOSK Operations Guide: StackLight alerts - Node time.

OpenSearch

This section was moved to MOSK Operations Guide: StackLight alerts - OpenSearch.

PostgreSQL

This section was moved to MOSK Operations Guide: StackLight alerts - PostgreSQL.

Prometheus

This section was moved to MOSK Operations Guide: StackLight alerts - Prometheus.

Prometheus MS Teams

This section was moved to MOSK Operations Guide: StackLight alerts - Prometheus MS Teams.

Prometheus Relay

This section was moved to MOSK Operations Guide: StackLight alerts - Prometheus Relay.

Release Controller

This section was moved to MOSK Operations Guide: StackLight alerts - Release Controller.

ServiceNow

This section was moved to MOSK Operations Guide: StackLight alerts - ServiceNow.

Salesforce notifier

This section was moved to MOSK Operations Guide: StackLight alerts - Salesforce notifier.

SSL certificates

This section was moved to MOSK Operations Guide: StackLight alerts - Monitoring of external endpoints and Container Cloud SSL.

Telegraf

This section was moved to MOSK Operations Guide: StackLight alerts - Telegraf.

Telemeter

This section was moved to MOSK Operations Guide: StackLight alerts - Telemeter.

Troubleshoot alerts

The subsections of this section were moved to MOSK Troubleshooting Guide: Troubleshoot StackLight - Troubleshoot alerts.

Troubleshoot cAdvisor alerts

This section was moved to MOSK Troubleshooting Guide: Troubleshoot StackLight - Troubleshoot cAdvisor alerts.

Troubleshoot Helm Controller alerts

This section was moved to MOSK Troubleshooting Guide: Troubleshoot StackLight - Troubleshoot Helm Controller alerts.

Troubleshoot Host Operating System Modules Controller alerts

This section was moved to MOSK Troubleshooting Guide: Troubleshoot StackLight - Troubleshoot Host Operating System Modules Controller alerts.

Troubleshoot Ubuntu kernel alerts

This section was moved to MOSK Troubleshooting Guide: Troubleshoot StackLight - Troubleshoot Ubuntu kernel alerts.

Troubleshoot Kubernetes applications alerts

This section was moved to MOSK Troubleshooting Guide: Troubleshoot StackLight - Troubleshoot Kubernetes applications alerts.

Troubleshoot Kubernetes resources alerts

This section was moved to MOSK Troubleshooting Guide: Troubleshoot StackLight - Troubleshoot Kubernetes resources alerts.

Troubleshoot Kubernetes storage alerts

This section was moved to MOSK Troubleshooting Guide: Troubleshoot StackLight - Troubleshoot Kubernetes storage alerts.

Troubleshoot Kubernetes system alerts

This section was moved to MOSK Troubleshooting Guide: Troubleshoot StackLight - Troubleshoot Kubernetes system alerts.

Troubleshoot Mirantis Container Cloud Exporter alerts

This section was moved to MOSK Troubleshooting Guide: Troubleshoot StackLight - Troubleshoot Mirantis Container Cloud Exporter alerts.

Troubleshoot Mirantis Kubernetes Engine alerts

This section was moved to MOSK Troubleshooting Guide: Troubleshoot StackLight - Troubleshoot Mirantis Kubernetes Engine alerts.

Troubleshoot OpenSearch alerts

Available since 2.26.0 (17.1.0 and 16.1.0)

This section was moved to MOSK Troubleshooting Guide: Troubleshoot StackLight - Troubleshoot OpenSearch alerts.

Troubleshoot Release Controller alerts

This section was moved to MOSK Troubleshooting Guide: Troubleshoot StackLight - Troubleshoot Release Controller alerts.

Troubleshoot Telemeter client alerts

This section was moved to MOSK Troubleshooting Guide: Troubleshoot StackLight - Troubleshoot Telemeter client alerts.

Silence alerts

This section was moved to MOSK documentation: Silence alerts.

StackLight rules for Kubernetes network policies

Available since Cluster releases 17.0.1 and 16.0.1

This section was moved to MOSK documentation: StackLight rules for Kubernetes network policies.

Configure StackLight

The subsections of this section were moved to MOSK Operations Guide: Configure StackLight.

StackLight configuration procedure

This section was moved to MOSK Operations Guide: Configure StackLight - StackLight configuration procedure.

StackLight configuration parameters

This section was moved to MOSK Operations Guide: Configure StackLight - StackLight configuration parameters.

Verify StackLight after configuration

This section was moved to MOSK Operations Guide: Configure StackLight - Verify StackLight after configuration.

Tune StackLight for long-term log retention

Available since 2.24.0 and 2.24.2 for MOSK 23.2

This section was moved to MOSK Operations Guide: StackLight operations - Tune StackLight for long-term log retention.

Enable log forwarding to external destinations

Available since 2.23.0 and 2.23.1 for MOSK 23.1

This section was moved to MOSK Operations Guide: StackLight operations - Enable log forwarding to external destinations.

Enable remote logging to syslog

Deprecated since 2.23.0

This section was moved to MOSK Operations Guide: StackLight operations - Enable remote logging to syslog.

Create logs-based metrics

This section was moved to MOSK Operations Guide: StackLight operations - Create logs-based metrics.

Enable generic metric scraping

This section was moved to MOSK Operations Guide: StackLight operations - Enable generic metric scraping.

Manage metrics filtering

Available since 2.24.0 and 2.24.2 for MOSK 23.2

This section was moved to MOSK Operations Guide: StackLight operations - Manage metrics filtering.

Use S.M.A.R.T. metrics for creating alert rules on bare metal clusters

Available since 2.27.0 (Cluster releases 17.2.0 and 16.2.0)

This section was moved to MOSK Operations Guide: StackLight operations - Use S.M.A.R.T. metrics for creating alert rules.

Deschedule StackLight Pods from a worker machine

This section was moved to MOSK Operations Guide: Deschedule StackLight Pods from a worker machine.

Calculate the storage retention time

Obsolete since 2.26.0 (17.1.0, 16.1.0) for OpenSearch Available since 2.22.0 and 2.23.1 (12.7.0, 11.6.0)

This section was moved to MOSK documentation: Calculate the storage retention time.

Troubleshooting

This section was moved to MOSK documentation: Troubleshooting Guide.

Collect cluster logs

This section was moved to MOSK documentation: Collect cluster logs.

Cluster deletion or detachment freezes

This section was moved to MOSK documentation: Cluster deletion or detachment freezes.

Keycloak admin console becomes inaccessible after changing the theme

This section was moved to MOSK documentation: Keycloak admin console becomes inaccessible after changing the theme.

The ‘database space exceeded’ error on large clusters

This section was moved to MOSK documentation: The ‘database space exceeded’ error on large clusters.

The auditd events cause ‘backlog limit exceeded’ messages

This section was moved to MOSK documentation: The auditd events cause ‘backlog limit exceeded’ messages.

Troubleshoot baremetal-based clusters

This section was moved to MOSK documentation: Troubleshoot bare metal.

Log in to the IPA virtual console for hardware troubleshooting

This section was moved to MOSK documentation: Log in to the IPA virtual console for hardware troubleshooting.

Bare metal hosts in ‘provisioned registration error’ state after update

This section was moved to MOSK documentation: Bare metal hosts in provisioned registration error state after update.

Troubleshoot an operating system upgrade with host restart

This section was moved to MOSK documentation: Troubleshoot an operating system upgrade with host restart.

Troubleshoot iPXE boot issues

This section was moved to MOSK documentation: Troubleshoot iPXE boot issues.

Provisioning failure due to device naming issues in a bare metal host profile

This section was moved to MOSK documentation: Provisioning failure due to device naming issues in a bare metal host profile.

Troubleshoot Ceph

This section was moved to MOSK documentation: Troubleshoot Ceph.

Ceph disaster recovery

This section was moved to MOSK documentation: Ceph disaster recovery.

Ceph Monitors recovery

This section was moved to MOSK documentation: Ceph Monitors recovery.

Remove Ceph OSD manually

This section was moved to MOSK documentation: Remove Ceph OSD manually.

KaaSCephOperationRequest failure with a timeout during rebalance

This section was moved to MOSK documentation: KaaSCephOperationRequest failure with a timeout during rebalance.

Ceph Monitors store.db size rapidly growing

This section was moved to MOSK documentation: Ceph Monitors store.db size rapidly growing.

Replaced Ceph OSD fails to start on authorization

This section was moved to MOSK documentation: Replaced Ceph OSD fails to start on authorization.

The ceph-exporter pods are present in the Ceph crash list

This section was moved to MOSK documentation: The ceph-exporter pods are present in the Ceph crash list.

Troubleshoot StackLight

This section was moved to MOSK documentation: Troubleshoot StackLight.

Patroni replication lag

This section was moved to MOSK documentation: Patroni replication lag.

Alertmanager does not send resolve notifications for custom alerts

This section was moved to MOSK documentation: Alertmanager does not send resolve notifications for custom alerts.

OpenSearchPVCMismatch alert raises due to the OpenSearch PVC size mismatch

This section was moved to MOSK documentation: OpenSearchPVCMismatch alert raises due to the OpenSearch PVC size mismatch.

OpenSearch cluster deadlock due to the corrupted index

This section was moved to MOSK documentation: OpenSearch cluster deadlock due to the corrupted index.

Failure of shard relocation in the OpenSearch cluster

This section was moved to MOSK documentation: Failure of shard relocation in the OpenSearch cluster.

StackLight pods get stuck with the ‘NodeAffinity failed’ error

This section was moved to MOSK documentation: StackLight pods get stuck with the NodeAffinity failed error.

No logs are forwarded to Splunk

This section was moved to MOSK documentation: No logs are forwarded to Splunk.

Security Guide

This guide was moved to MOSK documentation: Security Guide.

Firewall configuration

This section was moved to MOSK documentation: Firewall configuration.

Container Cloud

This section was moved to MOSK documentation: Firewall configuration - Container Cloud.

Mirantis Kubernetes Engine

For available Mirantis Kubernetes Engine (MKE) ports, refer to MKE Documentation: Open ports to incoming traffic.

StackLight

This section was moved to MOSK documentation: Firewall configuration - StackLight.

Ceph

This section was moved to MOSK documentation: Firewall configuration - Ceph.

Container images signing and validation

Available since 2.26.0 (17.1.0 and 16.1.0) Technology Preview

This section was moved to MOSK documentation: Container images signing and validation.

API Reference

Warning

This section is intended only for advanced Infrastructure Operators who are familiar with Kubernetes Cluster API.

Mirantis currently supports only those Mirantis Container Cloud API features that are implemented in the Container Cloud web UI. Use other Container Cloud API features for testing and evaluation purposes only.

The Container Cloud APIs are implemented using the Kubernetes CustomResourceDefinitions (CRDs) that enable you to expand the Kubernetes API. Different types of resources are grouped in the dedicated files, such as cluster.yaml or machines.yaml.

For testing and evaluation purposes, you may also use the experimental public Container Cloud API that allows for implementation of custom clients for creating and operating managed clusters. This repository contains branches that correspond to the Container Cloud releases. For an example usage, refer to the README file of the repository.

Public key resources

This section describes the PublicKey resource used in Mirantis Container Cloud API to provide SSH access to every machine of a cluster.

The Container Cloud PublicKey CR contains the following fields:

  • apiVersion

    API version of the object that is kaas.mirantis.com/v1alpha1

  • kind

    Object type that is PublicKey

  • metadata

    The metadata object field of the PublicKey resource contains the following fields:

    • name

      Name of the public key

    • namespace

      Project where the public key is created

  • spec

    The spec object field of the PublicKey resource contains the publicKey field that is an SSH public key value.

The PublicKey resource example:

apiVersion: kaas.mirantis.com/v1alpha1
kind: PublicKey
metadata:
  name: demokey
  namespace: test
spec:
  publicKey: |
    ssh-rsa AAAAB3NzaC1yc2EAAAA…
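
A minimal usage sketch, assuming the manifest above is saved as publickey.yaml; the file and key names are illustrative only:

# Generate an SSH key pair; put the contents of demokey.pub under spec:publicKey
ssh-keygen -t rsa -b 4096 -f ./demokey -N ""
# Create the PublicKey object in the target project
kubectl create -f publickey.yaml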

License resource

This section describes the License custom resource (CR) used in Mirantis Container Cloud API to maintain the Mirantis Container Cloud license data.

Warning

The kubectl apply command automatically saves the applied data as plain text into the kubectl.kubernetes.io/last-applied-configuration annotation of the corresponding object. This may result in revealing sensitive data in this annotation when creating or modifying the object.

Therefore, do not use kubectl apply on this object. Use kubectl create, kubectl patch, or kubectl edit instead.

If you used kubectl apply on this object, you can remove the kubectl.kubernetes.io/last-applied-configuration annotation from the object using kubectl edit.
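
For illustration, assuming the License manifest is saved as license.yaml and the object is addressable as license in kubectl:

# Create the object without kubectl apply
kubectl create -f license.yaml

# If kubectl apply was used earlier, open the object for editing and delete
# the kubectl.kubernetes.io/last-applied-configuration annotation
kubectl edit license license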

The Container Cloud License CR contains the following fields:

  • apiVersion

    The API version of the object that is kaas.mirantis.com/v1alpha1.

  • kind

    The object type that is License.

  • metadata

    The metadata object field of the License resource contains the following fields:

    • name

      The name of the License object, must be license.

  • spec

    The spec object field of the License resource contains the Secret reference where license data is stored.

    • license

      • secret

        The Secret reference where the license data is stored.

        • key

          The name of a key in the license Secret data field under which the license data is stored.

        • name

          The name of the Secret where the license data is stored.

      • value

        The value of the updated license. If you need to update the license, place the new license data under this field. The data is then moved to the referenced Secret, and the value field is cleared. For a usage sketch, see the example after the status configuration example below.

  • status
    • customerID

      The unique ID of a customer generated during the license issuance.

    • instance

      The unique ID of the current Mirantis Container Cloud instance.

    • dev

      The license is for development.

    • openstack

      The license limits for MOSK clusters:

      • clusters

        The maximum number of MOSK clusters to be deployed. If the field is absent, the number of deployments is unlimited.

      • workersPerCluster

        The maximum number of workers per MOSK cluster to be created. If the field is absent, the number of workers is unlimited.

    • expirationTime

      The license expiration time in the ISO 8601 format.

    • expired

      The license expiration state. If the value is true, the license has expired. If the field is absent, the license is valid.

Configuration example of the status fields:

status:
  customerID: "auth0|5dd501e54138450d337bc356"
  instance: 7589b5c3-57c5-4e64-96a0-30467189ae2b
  dev: true
  limits:
    clusters: 3
    workersPerCluster: 5
  expirationTime: 2028-11-28T23:00:00Z
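
The following sketch illustrates updating the license through the value field described above. It assumes that the object is addressable as license in kubectl and that <new-license-data> stands for the new license content:

kubectl patch license license --type merge \
  -p '{"spec":{"license":{"value":"<new-license-data>"}}}'

After the patch, the new data is moved to the referenced Secret and the value field is cleared.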

Diagnostic resource

Available since 2.28.0 (17.3.0 and 16.3.0)

This section describes the Diagnostic custom resource (CR) used in Mirantis Container Cloud API to trigger self-diagnostics for management or managed clusters.

The Container Cloud Diagnostic CR contains the following fields:

  • apiVersion

    API version of the object that is diagnostic.mirantis.com/v1alpha1.

  • kind

    Object type that is Diagnostic.

  • metadata

    Object metadata that contains the following fields:

    • name

      Name of the Diagnostic object.

    • namespace

      Namespace used to create the Diagnostic object. Must be equal to the namespace of the target cluster.

  • spec

    Resource specification that contains the following fields:

    • cluster

      Name of the target cluster to run diagnostics on.

    • checks

      Reserved for internal usage; any override will be discarded.

  • status
    • finishedAt

      Completion timestamp of diagnostics. If the Diagnostic Controller version is outdated, this field is not set and the corresponding error message is displayed in the error field.

    • error

      Error that occurs during diagnostics or if the Diagnostic Controller version is outdated. Omitted if empty.

    • controllerVersion

      Version of the controller that launched diagnostics.

    • result

      Map of check statuses where the key is the check name and the value is the result of the corresponding diagnostic check:

      • description

        Description of the check in plain text.

      • result

        Result of diagnostics. Possible values are PASS, ERROR, FAIL, WARNING, INFO.

      • message

        Optional. Explanation of the check results. It may optionally contain a reference to the documentation describing a known issue related to the check results, including the existing workaround for the issue.

      • success

        Success status of the check. Boolean.

      • ticketInfo

        Optional. Information about the ticket to track the resolution progress of the known issue related to the check results. For example, FIELD-12345.

The Diagnostic resource example:

apiVersion: diagnostic.mirantis.com/v1alpha1
kind: Diagnostic
metadata:
  name: test-diagnostic
  namespace: test-namespace
spec:
  cluster: test-cluster
status:
  finishedAt: 2024-07-01T11:27:14Z
  error: ""
  controllerVersion: v1.40.11
  result:
    bm_address_capacity:
      description: Baremetal addresses capacity
      message: LCM Subnet 'default/k8s-lcm-nics' has 8 allocatable addresses (threshold
        is 5) - OK; PXE-NIC Subnet 'default/k8s-pxe-nics' has 7 allocatable addresses
        (threshold is 5) - OK; Auto-assignable address pool 'default' from MetallbConfig
        'default/kaas-mgmt-metallb' has left 21 available IP addresses (threshold
        is 10) - OK
      result: INFO
      success: true
    bm_artifacts_overrides:
      description: Baremetal overrides check
      message: BM operator has no undesired overrides
      result: PASS
      success: true
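
A usage sketch, assuming the manifest above is saved as diagnostic.yaml and the resource is addressable as diagnostic in kubectl:

# Create the Diagnostic object in the namespace of the target cluster
kubectl create -f diagnostic.yaml
# Inspect the check results once the finishedAt field is set
kubectl get diagnostic test-diagnostic -n test-namespace -o yaml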

IAM resources

This section contains descriptions and examples of the IAM resources for Mirantis Container Cloud. For management details, see Manage user roles through Container Cloud API.


IAMUser

IAMUser is the Cluster (non-namespaced) object. Its objects are synced from Keycloak, that is, they are created upon user creation in Keycloak and deleted upon user deletion in Keycloak. The IAMUser object is exposed as read-only to all users. It contains the following fields:

  • apiVersion

    API version of the object that is iam.mirantis.com/v1alpha1

  • kind

    Object type that is IAMUser

  • metadata

    Object metadata that contains the following field:

    • name

      Sanitized user name without special characters, with the first 8 symbols of the user UUID appended to the end

  • displayName

    Name of the user as defined in the Keycloak database

  • externalID

    ID of the user as defined in the Keycloak database

Configuration example:

apiVersion: iam.mirantis.com/v1alpha1
kind: IAMUser
metadata:
  name: userone-f150d839
displayName: userone
externalID: f150d839-d03a-47c4-8a15-4886b7349791

IAMRole

IAMRole is the read-only cluster-level object that can have global, namespace, or cluster scope. It contains the following fields:

  • apiVersion

    API version of the object that is iam.mirantis.com/v1alpha1.

  • kind

    Object type that is IAMRole.

  • metadata

    Object metadata that contains the following field:

    • name

      Role name. Possible values are: global-admin, cluster-admin, operator, bm-pool-operator, user, member, stacklight-admin, management-admin.

      For details on user role assignment, see Manage user roles through Container Cloud API.

      Note

      The management-admin role is available since Container Cloud 2.25.0 (Cluster releases 17.0.0, 16.0.0, 14.1.0).

  • description

    Role description.

  • scope

    Role scope.

Configuration example:

apiVersion: iam.mirantis.com/v1alpha1
kind: IAMRole
metadata:
  name: global-admin
description: Gives permission to manage IAM role bindings in the Container Cloud deployment.
scope: global

IAMGlobalRoleBinding

IAMGlobalRoleBinding is the Cluster (non-namespaced) object that should be used for global role bindings in all namespaces. This object is accessible to users with the global-admin IAMRole assigned through the IAMGlobalRoleBinding object. The object contains the following fields:

  • apiVersion

    API version of the object that is iam.mirantis.com/v1alpha1.

  • kind

    Object type that is IAMGlobalRoleBinding.

  • metadata

    Object metadata that contains the following field:

    • name

      Role binding name. If the role binding is user-created, the user can set any unique name. If a name relates to a binding that is synced by user-controller from Keycloak, the naming convention is <username>-<rolename>.

  • role

    Object role that contains the following field:

    • name

      Role name.

  • user

    Object user that contains the following field:

    • name

      Name of the iamuser object that the defined role is provided to. Not equal to the user name in Keycloak.

  • legacy

    Defines whether the role binding is legacy. Possible values are true or false.

  • legacyRole

    Applicable when the legacy field value is true. Defines the legacy role name in Keycloak.

  • external

    Defines whether the role is assigned through Keycloak and is synced by user-controller with the Container Cloud API as the IAMGlobalRoleBinding object. Possible values are true or false.

Caution

If you create the IAM*RoleBinding, do not set or modify the legacy, legacyRole, and external fields unless absolutely necessary and you understand all implications.

Configuration example:

apiVersion: iam.mirantis.com/v1alpha1
kind: IAMGlobalRoleBinding
metadata:
  name: userone-global-admin
role:
  name: global-admin
user:
  name: userone-f150d839
external: false
legacy: false
legacyRole: ""

IAMRoleBinding

IAMRoleBinding is the namespaced object that represents a grant of one role to one user in all clusters of the namespace. It is accessible to users that have either of the following bindings assigned to them:

  • IAMGlobalRoleBinding that binds them with the global-admin, operator, or user iamRole. For user, the bindings are read-only.

  • IAMRoleBinding that binds them with the operator or user iamRole in a particular namespace. For user, the bindings are read-only.

The IAMRoleBinding object contains the following fields:

  • apiVersion

    API version of the object that is iam.mirantis.com/v1alpha1.

  • kind

    Object type that is IAMRoleBinding.

  • metadata

    Object metadata that contains the following fields:

    • namespace

      Namespace that the defined binding belongs to.

    • name

      Role binding name. If the role binding is user-created, the user can set any unique name. If a name relates to a binding that is synced from Keycloak, the naming convention is <userName>-<roleName>.

  • legacy

    Defines whether the role binding is legacy. Possible values are true or false.

  • legacyRole

    Applicable when the legacy field value is true. Defines the legacy role name in Keycloak.

  • external

    Defines whether the role is assigned through Keycloak and is synced by user-controller with the Container Cloud API as the IAMGlobalRoleBinding object. Possible values are true or false.

Caution

If you create the IAM*RoleBinding, do not set or modify the legacy, legacyRole, and external fields unless absolutely necessary and you understand all implications.

  • role

    Object role that contains the following field:

    • name

      Role name.

  • user

    Object user that contains the following field:

    • name

      Name of the iamuser object that the defined role is granted to. Not equal to the user name in Keycloak.

Configuration example:

apiVersion: iam.mirantis.com/v1alpha1
kind: IAMRoleBinding
metadata:
  namespace: nsone
  name: userone-operator
external: false
legacy: false
legacyRole: ""
role:
  name: operator
user:
  name: userone-f150d839

IAMClusterRoleBinding

IAMClusterRoleBinding is the namespaced object that represents a grant of one role to one user on one cluster in the namespace. This object is accessible to users that have either of the following bindings assigned to them:

  • IAMGlobalRoleBinding that binds them with the global-admin, operator, or user iamRole. For user, the bindings are read-only.

  • IAMRoleBinding that binds them with the operator or user iamRole in a particular namespace. For user, the bindings are read-only.

The IAMClusterRoleBinding object contains the following fields:

  • apiVersion

    API version of the object that is iam.mirantis.com/v1alpha1.

  • kind

    Object type that is IAMClusterRoleBinding.

  • metadata

    Object metadata that contains the following fields:

    • namespace

      Namespace of the cluster that the defined binding belongs to.

    • name

      Role binding name. If the role binding is user-created, the user can set any unique name. If a name relates to a binding that is synced from Keycloak, the naming convention is <userName>-<roleName>-<clusterName>.

  • role

    Object role that contains the following field:

    • name

      Role name.

  • user

    Object user that contains the following field:

    • name

      Name of the iamuser object that the defined role is granted to. Not equal to the user name in Keycloak.

  • cluster

    Object cluster that contains the following field:

    • name

      Name of the cluster on which the defined role is granted.

  • legacy

    Defines whether the role binding is legacy. Possible values are true or false.

  • legacyRole

    Applicable when the legacy field value is true. Defines the legacy role name in Keycloak.

  • external

    Defines whether the role is assigned through Keycloak and is synced by user-controller with the Container Cloud API as the IAMGlobalRoleBinding object. Possible values are true or false.

Caution

If you create the IAM*RoleBinding, do not set or modify the legacy, legacyRole, and external fields unless absolutely necessary and you understand all implications.

Configuration example:

apiVersion: iam.mirantis.com/v1alpha1
kind: IAMClusterRoleBinding
metadata:
  namespace: nsone
  name: userone-clusterone-admin
role:
  name: cluster-admin
user:
  name: userone-f150d839
cluster:
  name: clusterone
legacy: false
legacyRole: ""
external: false

ClusterOIDCConfiguration resource for MKE

Available since 17.0.0, 16.0.0, and 14.1.0

This section contains description of the OpenID Connect (OIDC) custom resource for Mirantis Container Cloud that you can use to customize OIDC for Mirantis Kubernetes Engine (MKE) on managed clusters. Using this resource, add your own OIDC provider to authenticate user requests to Kubernetes. For OIDC provider requirements, see OIDC official specification.

The creation procedure of the ClusterOIDCConfiguration for a managed cluster is described in Add a custom OIDC provider for MKE.

The Container Cloud ClusterOIDCConfiguration custom resource contains the following fields:

  • apiVersion

    The API version of the object that is kaas.mirantis.com/v1alpha1.

  • kind

    The object type that is ClusterOIDCConfiguration.

  • metadata

    The metadata object field of the ClusterOIDCConfiguration resource contains the following fields:

    • name

      The object name.

    • namespace

      The project name (Kubernetes namespace) of the related managed cluster.

  • spec

    The spec object field of the ClusterOIDCConfiguration resource contains the following fields:

    • adminRoleCriteria

      Definition of the id_token claim with the admin role and the role value.

      • matchType

        Matching type of the claim with the requested role. Possible values that MKE uses to match the claim with the requested value:

        • must

          Requires a plain string in the id_token claim, for example, "iam_role": "mke-admin".

        • contains

          Requires an array of strings in the id_token claim, for example, "iam_role": ["mke-admin", "pod-reader"].

      • name

        Name of the admin id_token claim containing a role or array of roles.

      • value

        Role value that matches the "iam_role" value in the admin id_token claim.

    • caBundle

      Base64-encoded certificate authority bundle of the OIDC provider endpoint.

    • clientID

      ID of the OIDC client to be used by Kubernetes.

    • clientSecret

      Secret value of the clientID parameter. After the ClusterOIDCConfiguration object creation, this field is updated automatically with a reference to the corresponding Secret. For example:

      clientSecret:
        secret:
          key: value
          name: CLUSTER_NAME-wqbkj
      
    • issuer

      OIDC endpoint.

Configuration example:

apiVersion: kaas.mirantis.com/v1alpha1
kind: ClusterOIDCConfiguration
metadata:
  name: CLUSTER_NAME
  namespace: CLUSTER_NAMESPACE
spec:
  adminRoleCriteria:
    matchType: contains
    name: iam_roles
    value: mke-admin
  caBundle: BASE64_ENCODED_CA
  clientID: MY_CLIENT
  clientSecret:
    value: MY_SECRET
  issuer: https://auth.example.com/

UpdateGroup resource

Available since 2.27.0 (17.2.0 and 16.2.0)

This section describes the UpdateGroup custom resource (CR) used in the Container Cloud API to configure update concurrency for specific sets of machines or machine pools within a cluster. This resource enhances the update process by allowing more granular control over the concurrency of machine updates. It also provides a way to control the reboot behavior of machines during a Cluster release update.

The Container Cloud UpdateGroup CR contains the following fields:

  • apiVersion

    API version of the object that is kaas.mirantis.com/v1alpha1.

  • kind

    Object type that is UpdateGroup.

  • metadata

    Metadata of the UpdateGroup CR that contains the following fields. All of them are required.

    • name

      Name of the UpdateGroup object.

    • namespace

      Project where the UpdateGroup is created.

    • labels

      Label to associate the UpdateGroup with a specific cluster in the cluster.sigs.k8s.io/cluster-name: <cluster-name> format.

  • spec

    Specification of the UpdateGroup CR that contains the following fields:

    • index

      Index to determine the processing order of the UpdateGroup object. Groups with the same index are processed concurrently. For an illustration of how index and concurrentUpdates interact, see the two-group example at the end of this section.

      The update order of a machine within the same group is determined by the upgrade index of a specific machine. For details, see Change the upgrade order of a machine or machine pool.

    • concurrentUpdates

      Number of machines to update concurrently within UpdateGroup.

    • rebootIfUpdateRequires Since 2.28.0 (17.3.0 and 16.3.0)

      Technology Preview. Controls automatic reboot of controller or worker machines of an update group if a Cluster release update requires a node reboot, for example, when a kernel version update is available in the new Cluster release. You can set this parameter for management or managed clusters.

      Boolean. By default, true on management clusters and false on managed clusters. On managed clusters:

      • If set to true, related machines are rebooted as part of a Cluster release update that requires a reboot.

      • If set to false, machines are not rebooted even if a Cluster release update requires a reboot.

      Caution

      During a distribution upgrade, machines are always rebooted, overriding rebootIfUpdateRequires: false.

Configuration example:

apiVersion: kaas.mirantis.com/v1alpha1
kind: UpdateGroup
metadata:
  name: update-group-example
  namespace: managed-ns
  labels:
    cluster.sigs.k8s.io/cluster-name: managed-cluster
spec:
  index: 10
  concurrentUpdates: 2
  rebootIfUpdateRequires: false
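
The following sketch illustrates how index and concurrentUpdates interact across two update groups of the same cluster. The group names and values here are assumptions for illustration only: the group with the lower index is processed first and its machines update two at a time, after which the second group is processed one machine at a time.

apiVersion: kaas.mirantis.com/v1alpha1
kind: UpdateGroup
metadata:
  name: update-group-controllers # assumed name for illustration
  namespace: managed-ns
  labels:
    cluster.sigs.k8s.io/cluster-name: managed-cluster
spec:
  index: 10             # processed first
  concurrentUpdates: 2  # two machines of this group update in parallel
---
apiVersion: kaas.mirantis.com/v1alpha1
kind: UpdateGroup
metadata:
  name: update-group-workers # assumed name for illustration
  namespace: managed-ns
  labels:
    cluster.sigs.k8s.io/cluster-name: managed-cluster
spec:
  index: 20             # processed after the group with index 10
  concurrentUpdates: 1  # machines of this group update one by one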

MCCUpgrade resource

This section describes the MCCUpgrade resource used in Mirantis Container Cloud API to configure a schedule for the Container Cloud update.

The Container Cloud MCCUpgrade CR contains the following fields:

  • apiVersion

    API version of the object that is kaas.mirantis.com/v1alpha1.

  • kind

    Object type that is MCCUpgrade.

  • metadata

    The metadata object field of the MCCUpgrade resource contains the following fields:

    • name

      Name of the MCCUpgrade object; must be mcc-upgrade. For a complete object example, see the end of this section.

  • spec

    The spec object field of the MCCUpgrade resource contains the schedule that defines when the Container Cloud update is allowed or blocked. This field contains the following fields:

    • blockUntil

      Deprecated since Container Cloud 2.28.0 (Cluster release 16.3.0). Use autoDelay instead.

      Time stamp in the ISO 8601 format, for example, 2021-12-31T12:30:00-05:00. Updates are disabled until this time. You cannot set this field to more than 7 days in the future or more than 30 days after the latest Container Cloud release.

    • autoDelay

      Available since Container Cloud 2.28.0 (Cluster release 16.3.0).

      Flag that enables delay of the management cluster auto-update to a new Container Cloud release and ensures that auto-update is not started immediately on the release date. Boolean, false by default.

      The delay period is at least 20 days for each newly discovered release and depends on the specifics of each release cycle and on the optional configuration of weekdays and hours selected for update. You can verify the exact date of a scheduled auto-update in the status section of the MCCUpgrade object.

      Note

      Modifying the delay period is not supported.

    • timeZone

      Name of a time zone in the IANA Time Zone Database. This time zone will be used for all schedule calculations. For example: Europe/Samara, CET, America/Los_Angeles.

    • schedule

      List of schedule items that allow an update at specific hours or on specific weekdays. The update can proceed if at least one of these items allows it. A schedule item allows an update when both its hours and weekdays conditions are met. If this list is empty or absent, the update is allowed at any hour of any day. Every schedule item contains the following fields:

      • hours

        Object with two fields: from and to. Both must be non-negative integers not greater than 24. The to value must be greater than the from value. An update is allowed if the current hour in the time zone specified by timeZone is greater than or equal to from and less than to. If hours is absent, the update is allowed at any hour.

      • weekdays

        Object with boolean fields with these names:

        • monday

        • tuesday

        • wednesday

        • thursday

        • friday

        • saturday

        • sunday

        Update is allowed only on weekdays that have the corresponding field set to true. If all fields are false or absent, or weekdays is empty or absent, update is allowed on all weekdays.

    Full spec example:

    spec:
      autoDelay: true
      timeZone: CET
      schedule:
      - hours:
          from: 10
          to: 17
        weekdays:
          monday: true
          tuesday: true
      - hours:
          from: 7
          to: 10
        weekdays:
          monday: true
          friday: true
    

    In this example, all schedule calculations are done in the CET timezone and upgrades are allowed only:

    • From 7:00 to 17:00 on Mondays

    • From 10:00 to 17:00 on Tuesdays

    • From 7:00 to 10:00 on Fridays

  • status

    The status object field of the MCCUpgrade resource contains information about the next planned Container Cloud update, if available. This field contains the following fields:

    • nextAttempt Deprecated since 2.28.0 (Cluster release 16.3.0)

      Time stamp in the ISO 8601 format indicating the time when the Release Controller will attempt to discover and install a new Container Cloud release. Set to the next allowed time according to the schedule configured in spec or one minute in the future if the schedule currently allows update.

    • message Deprecated since 2.28.0 (Cluster release 16.3.0)

      Message from the last update step or attempt.

    • nextRelease

      Object describing the next release that Container Cloud will be updated to. Absent if no new releases have been discovered. Contains the following fields:

      • version

        Semver-compatible version of the next Container Cloud release, for example, 2.22.0.

      • date

        Time stamp in the ISO 8601 format of the Container Cloud release defined in version:

        • Since 2.28.0 (Cluster release 16.3.0), the field indicates the publish time stamp of a new release.

        • Before 2.28.0 (Cluster release 16.2.x or earlier), the field indicates the discovery time stamp of a new release.

      • scheduled

        Available since Container Cloud 2.28.0 (Cluster release 16.3.0). Time window that the pending Container Cloud release update is scheduled for:

        • startTime

          Time stamp in the ISO 8601 format indicating the start time of the update for the pending Container Cloud release.

        • endTime

          Time stamp in the ISO 8601 format indicating the end time of the update for the pending Container Cloud release.

    • lastUpgrade

      Time stamps of the latest Container Cloud update:

      • startedAt

        Time stamp in the ISO 8601 format indicating the time when the last Container Cloud update started.

      • finishedAt

        Time stamp in the ISO 8601 format indicating the time when the last Container Cloud update finished.

    • conditions

      Available since Container Cloud 2.28.0 (Cluster release 16.3.0). List of status conditions describing the status of the MCCUpgrade resource. Each condition has the following format:

      • type

        Condition type representing a particular aspect of the MCCUpgrade object. Currently, the only supported condition type is Ready that defines readiness to process a new release.

        If the status field of the Ready condition type is False, the Release Controller blocks the start of update operations.

      • status

        Condition status. Possible values: True, False, Unknown.

      • reason

        Machine-readable explanation of the condition.

      • lastTransitionTime

        Time of the latest condition transition.

      • message

        Human-readable description of the condition.

Example of MCCUpgrade status:

status:
  conditions:
  - lastTransitionTime: "2024-09-16T13:22:27Z"
    message: New release scheduled for upgrade
    reason: ReleaseScheduled
    status: "True"
    type: Ready
  lastUpgrade: {}
  message: ''
  nextAttempt: "2024-09-16T13:23:27Z"
  nextRelease:
    date: "2024-08-25T21:05:46Z"
    scheduled:
      endTime: "2024-09-17T00:00:00Z"
      startTime: "2024-09-16T00:00:00Z"
    version: 2.28.0
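
For reference, a minimal sketch of a complete MCCUpgrade object that combines the mandatory mcc-upgrade object name with schedule settings similar to the spec example above; the schedule values are illustrative assumptions:

apiVersion: kaas.mirantis.com/v1alpha1
kind: MCCUpgrade
metadata:
  name: mcc-upgrade
spec:
  autoDelay: true
  timeZone: CET
  schedule:
  - hours:
      from: 10
      to: 17
    weekdays:
      monday: true
      tuesday: true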

ClusterUpdatePlan resource

Available since 2.27.0 (17.2.0 and 16.2.0) TechPreview

This section describes the ClusterUpdatePlan custom resource (CR) used in the Container Cloud API to granularly control the update process of a managed cluster by stopping the update after each step.

The ClusterUpdatePlan CR contains the following fields:

  • apiVersion

    API version of the object that is kaas.mirantis.com/v1alpha1.

  • kind

    Object type that is ClusterUpdatePlan.

  • metadata

    Metadata of the ClusterUpdatePlan CR that contains the following fields:

    • name

      Name of the ClusterUpdatePlan object.

    • namespace

      Project name of the cluster that relates to ClusterUpdatePlan.

  • spec

    Specification of the ClusterUpdatePlan CR that contains the following fields:

    • source

      Name of the source Cluster release from which the cluster is updated.

    • target

      Name of the target Cluster release to which the cluster is updated.

    • cluster

      Name of the cluster for which ClusterUpdatePlan is created.

    • releaseNotes

      Available since Container Cloud 2.29.0 (Cluster releases 17.4.0 and 16.4.0). Link to MOSK release notes of the target release.

    • steps

      List of update steps, where each step contains the following fields:

      • id

        Available since Container Cloud 2.28.0 (Cluster releases 17.3.0 and 16.3.0). Step ID.

      • name

        Step name.

      • description

        Step description.

      • constraints

        Description of constraints applied during the step execution.

      • impact

        Impact of the step on the cluster functionality and workloads. Contains the following fields:

        • users

          Impact on the Container Cloud user operations. Possible values: none, major, or minor.

        • workloads

          Impact on workloads. Possible values: none, major, or minor.

        • info

          Additional details on impact, if any.

      • duration

        Details about duration of the step execution. Contains the following fields:

        • estimated

          Estimated time to complete the update step.

          Note

          Before Container Cloud 2.29.0 (Cluster releases 17.4.0 and 16.4.0), this field was named eta.

        • info

          Additional details on update duration, if any.

      • granularity

        Information on the current step granularity. Indicates whether the current step is applied to each machine individually or to the entire cluster at once. Possible values are cluster or machine.

      • commence

        Flag that allows controlling the step execution. Boolean, false by default. If set to true, the step starts execution after all previous steps are completed.

        Caution

        Cancelling an already started update step is unsupported.

  • status

    Status of the ClusterUpdatePlan CR that contains the following fields:

    • startedAt

      Time when the ClusterUpdatePlan update started.

    • completedAt

      Available since Container Cloud 2.29.0 (Cluster releases 17.4.0 and 16.4.0). Time of update completion.

    • status

      Overall object status.

    • steps

      List of step statuses in the same order as defined in spec. Each step status contains the following fields:

      • id

        Available since Container Cloud 2.28.0 (Cluster releases 17.3.0 and 16.3.0). Step ID.

      • name

        Step name.

      • status

        Step status. Possible values are:

        • NotStarted

          Step has not started yet.

        • Scheduled

          Available since Container Cloud 2.28.0 (Cluster releases 17.3.0 and 16.3.0). Step is already triggered but its execution has not started yet.

        • InProgress

          Step is currently in progress.

        • AutoPaused

          Available since Container Cloud 2.29.0 (Cluster release 17.4.0) as Technology Preview. Update is automatically paused by the trigger from a firing alert defined in the UpdateAutoPause configuration. For details, see UpdateAutoPause resource.

        • Stuck

          Step execution has encountered an issue, which also indicates that the step does not fit into the estimate defined in the duration field for this step in spec.

        • Completed

          Step has been completed.

      • message

        Message describing status details of the current update step.

      • duration

        Current duration of the step execution.

      • startedAt

        Start time of the step execution.

Example of a ClusterUpdatePlan object:

apiVersion: kaas.mirantis.com/v1alpha1
kind: ClusterUpdatePlan
metadata:
  creationTimestamp: "2025-02-06T16:53:51Z"
  generation: 11
  name: mosk-17.4.0
  namespace: child
  resourceVersion: "6072567"
  uid: 82c072be-1dc5-43dd-b8cf-bc643206d563
spec:
  cluster: mosk
  releaseNotes: https://docs.mirantis.com/mosk/latest/25.1-series.html
  source: mosk-17-3-0-24-3
  steps:
  - commence: true
    description:
    - install new version of OpenStack and Tungsten Fabric life cycle management
      modules
    - OpenStack and Tungsten Fabric container images pre-cached
    - OpenStack and Tungsten Fabric control plane components restarted in parallel
    duration:
      estimated: 1h30m0s
      info:
      - 15 minutes to cache the images and update the life cycle management modules
      - 1h to restart the components
    granularity: cluster
    id: openstack
    impact:
      info:
      - some of the running cloud operations may fail due to restart of API services
        and schedulers
      - DNS might be affected
      users: minor
      workloads: minor
    name: Update OpenStack and Tungsten Fabric
  - commence: true
    description:
    - Ceph version update
    - restart Ceph monitor, manager, object gateway (radosgw), and metadata services
    - restart OSD services node-by-node, or rack-by-rack depending on the cluster
      configuration
    duration:
      estimated: 8m30s
      info:
      - 15 minutes for the Ceph version update
      - around 40 minutes to update Ceph cluster of 30 nodes
    granularity: cluster
    id: ceph
    impact:
      info:
      - 'minor unavailability of object storage APIs: S3/Swift'
      - workloads may experience IO performance degradation for the virtual storage
        devices backed by Ceph
      users: minor
      workloads: minor
    name: Update Ceph
  - commence: true
    description:
    - new host OS kernel and packages get installed
    - host OS configuration re-applied
    - container runtime version gets bumped
    - new versions of Kubernetes components installed
    duration:
      estimated: 1h40m0s
      info:
      - about 20 minutes to update host OS per a Kubernetes controller, nodes updated
        one-by-one
      - Kubernetes components update takes about 40 minutes, all nodes in parallel
    granularity: cluster
    id: k8s-controllers
    impact:
      users: none
      workloads: none
    name: Update host OS and Kubernetes components on master nodes
  - commence: true
    description:
    - new host OS kernel and packages get installed
    - host OS configuration re-applied
    - container runtime version gets bumped
    - new versions of Kubernetes components installed
    - data plane components (Open vSwitch and Neutron L3 agents, TF agents and vrouter)
      restarted on gateway and compute nodes
    - storage nodes put to "no-out" mode to prevent rebalancing
    - by default, nodes are updated one-by-one, a node group can be configured to
      update several nodes in parallel
    duration:
      estimated: 8h0m0s
      info:
      - host OS update - up to 15 minutes per node (not including host OS configuration
        modules)
      - Kubernetes components update - up to 15 minutes per node
      - OpenStack controllers and gateways updated one-by-one
      - nodes hosting Ceph OSD, monitor, manager, metadata, object gateway (radosgw)
        services updated one-by-one
    granularity: machine
    id: k8s-workers-vdrok-child-default
    impact:
      info:
      - 'OpenStack controller nodes: some running OpenStack operations might not
        complete due to restart of components'
      - 'OpenStack compute nodes: minor loss of the East-West connectivity with
        the Open vSwitch networking back end that causes approximately 5 min of
        downtime'
      - 'OpenStack gateway nodes: minor loss of the North-South connectivity with
        the Open vSwitch networking back end: a non-distributed HA virtual router
        needs up to 1 minute to fail over; a non-distributed and non-HA virtual
        router failover time depends on many factors and may take up to 10 minutes'
      users: major
      workloads: major
    name: Update host OS and Kubernetes components on worker nodes, group vdrok-child-default
  - commence: true
    description:
    - restart of StackLight, MetalLB services
    - restart of auxiliary controllers and charts
    duration:
      estimated: 1h30m0s
    granularity: cluster
    id: mcc-components
    impact:
      info:
      - minor cloud API downtime due restart of MetalLB components
      users: minor
      workloads: none
    name: Auxiliary components update
  target: mosk-17-4-0-25-1
status:
  completedAt: "2025-02-07T19:24:51Z"
  startedAt: "2025-02-07T17:07:02Z"
  status: Completed
  steps:
  - duration: 26m36.355605528s
    id: openstack
    message: Ready
    name: Update OpenStack and Tungsten Fabric
    startedAt: "2025-02-07T17:07:02Z"
    status: Completed
  - duration: 6m1.124356485s
    id: ceph
    message: Ready
    name: Update Ceph
    startedAt: "2025-02-07T17:33:38Z"
    status: Completed
  - duration: 24m3.151554465s
    id: k8s-controllers
    message: Ready
    name: Update host OS and Kubernetes components on master nodes
    startedAt: "2025-02-07T17:39:39Z"
    status: Completed
  - duration: 1h19m9.359184228s
    id: k8s-workers-vdrok-child-default
    message: Ready
    name: Update host OS and Kubernetes components on worker nodes, group vdrok-child-default
    startedAt: "2025-02-07T18:03:42Z"
    status: Completed
  - duration: 2m0.772243006s
    id: mcc-components
    message: Ready
    name: Auxiliary components update
    startedAt: "2025-02-07T19:22:51Z"
    status: Completed

UpdateAutoPause resource

Available since 2.29.0 (17.4.0) Technology Preview

This section describes the UpdateAutoPause custom resource (CR) used in the Container Cloud API to configure automatic pausing of cluster release updates in a managed cluster using StackLight alerts.

The Container Cloud UpdateAutoPause CR contains the following fields:

  • apiVersion

    API version of the object that is kaas.mirantis.com/v1alpha1.

  • kind

    Object type that is UpdateAutoPause.

  • metadata

    Metadata of the UpdateAutoPause CR that contains the following fields:

    • name

      Name of the UpdateAutoPause object. Must match the cluster name.

    • namespace

      Project where the UpdateAutoPause is created. Must match the cluster namespace.

  • spec

    Specification of the UpdateAutoPause CR that contains the following field:

    • alerts

      List of alert names. The occurrence of any alert from this list triggers auto-pause of the cluster release update.

  • status

    Status of the UpdateAutoPause CR that contains the following fields:

    • firingAlerts

      List of currently firing alerts from the specified set.

    • error

      Error message, if any, encountered during object processing.

Configuration example:

apiVersion: kaas.mirantis.com/v1alpha1
kind: UpdateAutoPause
metadata:
  name: example-cluster
  namespace: example-ns
spec:
  alerts:
    - KubernetesNodeNotReady
    - KubernetesContainerOOMKilled
status:
  firingAlerts:
    - KubernetesNodeNotReady
  error: ""

CacheWarmupRequest resource

TechPreview Available since 2.24.0 and 23.2 for MOSK clusters

This section describes the CacheWarmupRequest custom resource (CR) used in the Container Cloud API to predownload images and store them in the mcc-cache service.

The Container Cloud CacheWarmupRequest CR contains the following fields:

  • apiVersion

    API version of the object that is kaas.mirantis.com/v1alpha1.

  • kind

    Object type that is CacheWarmupRequest.

  • metadata

    The metadata object field of the CacheWarmupRequest resource contains the following fields:

    • name

      Name of the CacheWarmupRequest object that must match the existing management cluster name to which the warm-up operation applies.

    • namespace

      Container Cloud project in which the cluster is created. Always set to default, which is the only project available for management cluster creation.

  • spec

    The spec object field of the CacheWarmupRequest resource contains the settings for artifacts fetching and artifacts filtering through Cluster releases. This field contains the following fields:

    • clusterReleases

      Array of strings. Defines a set of Cluster release names to warm up in the mcc-cache service.

    • openstackReleases

      Optional. Array of strings. Defines a set of OpenStack releases to warm up in mcc-cache. Applicable only if the clusterReleases field contains mosk releases.

      If you plan to upgrade an OpenStack version, define the current and the target versions including the intermediate versions, if any. For example, to upgrade OpenStack from Victoria to Yoga:

      openstackReleases:
      - victoria
      - wallaby
      - xena
      - yoga
      
    • fetchRequestTimeout

      Optional. String. Timeout for a single request to download a single artifact. Defaults to 30m. For example, 1h2m3s.

    • clientsPerEndpoint

      Optional. Integer. Number of clients to use for fetching artifacts per each mcc-cache service endpoint. Defaults to 2.

    • openstackOnly

      Optional. Boolean. Enables fetching of only the OpenStack-related artifacts for MOSK. Defaults to false. Applicable only if the clusterReleases field contains mosk releases. Useful when you need to upgrade only the OpenStack version. For an additional example, see the end of this section.

Example configuration:

apiVersion: kaas.mirantis.com/v1alpha1
kind: CacheWarmupRequest
metadata:
  name: example-cluster-name
  namespace: default
spec:
  clusterReleases:
  - mke-14-0-1
  - mosk-15-0-1
  openstackReleases:
  - yoga
  fetchRequestTimeout: 30m
  clientsPerEndpoint: 2
  openstackOnly: false

In this example:

  • The CacheWarmupRequest object is created for a management cluster named example-cluster-name.

  • The CacheWarmupRequest object is created in the only allowed default Container Cloud project.

  • Two Cluster releases mosk-15-0-1 and mke-14-0-1 will be predownloaded.

  • For mosk-15-0-1, only images related to the OpenStack version Yoga will be predownloaded.

  • Maximum time-out for a single request to download a single artifact is 30 minutes.

  • Two parallel workers will fetch artifacts per each mcc-cache service endpoint.

  • All artifacts will be fetched, not only those related to OpenStack.
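
The following additional sketch shows a hypothetical CacheWarmupRequest that pre-downloads only the OpenStack-related artifacts for an upgrade path from Victoria to Yoga, as described for the openstackReleases and openstackOnly fields above. The cluster name is an assumption for illustration:

apiVersion: kaas.mirantis.com/v1alpha1
kind: CacheWarmupRequest
metadata:
  name: example-cluster-name
  namespace: default
spec:
  clusterReleases:
  - mosk-15-0-1
  openstackReleases:
  - victoria
  - wallaby
  - xena
  - yoga
  openstackOnly: true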

GracefulRebootRequest resource

Available since 2.23.0 and 2.23.1 for MOSK 23.1

This section describes the GracefulRebootRequest custom resource (CR) used in the Container Cloud API for a rolling reboot of several or all cluster machines without interrupting workloads. The resource is also useful for a bulk reboot of machines, for example, on large clusters.

The Container Cloud GracefulRebootRequest CR contains the following fields:

  • apiVersion

    API version of the object that is kaas.mirantis.com/v1alpha1.

  • kind

    Object type that is GracefulRebootRequest.

  • metadata

    Metadata of the GracefulRebootRequest CR that contains the following fields:

    • name

      Name of the GracefulRebootRequest object. The object name must match the name of the cluster on which you want to reboot machines.

    • namespace

      Project where the GracefulRebootRequest is created.

  • spec

    Specification of the GracefulRebootRequest CR that contains the following fields:

    • machines

      List of machines for a rolling reboot. Each machine of the list is cordoned, drained, rebooted, and uncordoned in the order of cluster upgrade policy. For details about the upgrade order, see Change the upgrade order of a machine or machine pool.

      Leave this field empty to reboot all cluster machines. For an example, see the end of this section.

      Caution

      The cluster and machines must have the Ready status to perform a graceful reboot.

Configuration example:

apiVersion: kaas.mirantis.com/v1alpha1
kind: GracefulRebootRequest
metadata:
  name: demo-cluster
  namespace: demo-project
spec:
  machines:
  - demo-worker-machine-1
  - demo-worker-machine-3
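
As noted in the machines field description, leaving the list empty requests a rolling reboot of all cluster machines. A minimal sketch, assuming the same cluster and project names and expressing the empty list explicitly:

apiVersion: kaas.mirantis.com/v1alpha1
kind: GracefulRebootRequest
metadata:
  name: demo-cluster
  namespace: demo-project
spec:
  machines: []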

ContainerRegistry resource

This section describes the ContainerRegistry custom resource (CR) used in Mirantis Container Cloud API to configure CA certificates on machines to access private Docker registries.

The Container Cloud ContainerRegistry CR contains the following fields:

  • apiVersion

    API version of the object that is kaas.mirantis.com/v1alpha1

  • kind

    Object type that is ContainerRegistry

  • metadata

    The metadata object field of the ContainerRegistry CR contains the following fields:

    • name

      Name of the container registry

    • namespace

      Project where the container registry is created

  • spec

    The spec object field of the ContainerRegistry CR contains the following fields:

    • domain

      Host name and optional port of the registry

    • CACert

      CA certificate of the registry in the base64-encoded format

Caution

Only one ContainerRegistry resource can exist per domain. To configure multiple CA certificates for the same domain, combine them into one certificate.

The ContainerRegistry resource example:

apiVersion: kaas.mirantis.com/v1alpha1
kind: ContainerRegistry
metadata:
  name: demoregistry
  namespace: test
spec:
  domain: demohost:5000
  CACert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0...

TLSConfig resource

This section describes the TLSConfig resource used in Mirantis Container Cloud API to configure TLS certificates for cluster applications.

Warning

The kubectl apply command automatically saves the applied data as plain text into the kubectl.kubernetes.io/last-applied-configuration annotation of the corresponding object. This may result in revealing sensitive data in this annotation when creating or modifying the object.

Therefore, do not use kubectl apply on this object. Use kubectl create, kubectl patch, or kubectl edit instead.

If you used kubectl apply on this object, you can remove the kubectl.kubernetes.io/last-applied-configuration annotation from the object using kubectl edit.

The Container Cloud TLSConfig CR contains the following fields:

  • apiVersion

    API version of the object that is kaas.mirantis.com/v1alpha1.

  • kind

    Object type that is TLSConfig.

  • metadata

    The metadata object field of the TLSConfig resource contains the following fields:

    • name

      Name of the TLSConfig object.

    • namespace

      Project where the TLS certificate is created.

  • spec

    The spec object field contains the configuration to apply for an application. It contains the following fields:

    • serverName

      Host name of a server.

    • serverCertificate

      Certificate to authenticate the server identity to a client. A valid certificate bundle can be passed. The server certificate must be at the top of the chain.

    • privateKey

      Reference to the Secret object that contains the server private key. The private key must correspond to the public key used in the server certificate. For a sketch of the referenced Secret object, see the end of this section.

      • key

        Key name in the secret.

      • name

        Secret name.

    • caCertificate

      Certificate of the authority that issued the server certificate. If a CA certificate is unavailable, use the top-most intermediate certificate.

Configuration example:

apiVersion: kaas.mirantis.com/v1alpha1
kind: TLSConfig
metadata:
  namespace: default
  name: keycloak
spec:
  caCertificate: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0...
  privateKey:
    secret:
      key: value
      name: keycloak-s7mcj
  serverCertificate: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0...
  serverName: keycloak.mirantis.com
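
For clarity, the following sketch shows a Secret object of the kind that the privateKey field above references, assuming the names from the configuration example. The Secret type, data key, and key contents are assumptions for illustration; create such an object with kubectl create rather than kubectl apply, as explained in the warning above.

apiVersion: v1
kind: Secret
metadata:
  name: keycloak-s7mcj
  namespace: default
type: Opaque
data:
  value: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0... # base64-encoded private key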

Bare metal resources

This section contains descriptions and examples of the baremetal-based Kubernetes resources for Mirantis Container Cloud.

BareMetalHost

Private API since Container Cloud 2.29.0 (Cluster release 16.4.0)

Warning

Since Container Cloud 2.29.0 (Cluster release 16.4.0), use the BareMetalHostInventory resource instead of BareMetalHost for adding and modifying configuration of a bare metal server. Any change in the BareMetalHost object will be overwritten by BareMetalHostInventory.

For any existing BareMetalHost object, a BareMetalHostInventory object is created automatically during management cluster update to the Cluster release 16.4.0.

This section describes the BareMetalHost resource used in the Mirantis Container Cloud API. A BareMetalHost object is created for each Machine and contains all information about the machine hardware configuration. BareMetalHost objects are used to monitor and manage the state of a bare metal server. This includes inspecting the host hardware, firmware, and operating system provisioning, as well as power control and server deprovisioning. When a machine is created, the bare metal provider assigns a BareMetalHost to that machine using labels and the BareMetalHostProfile configuration.

For demonstration purposes, the Container Cloud BareMetalHost custom resource (CR) can be split into the following major sections:

BareMetalHost metadata

The Container Cloud BareMetalHost CR contains the following fields:

  • apiVersion

    API version of the object that is metal3.io/v1alpha1.

  • kind

    Object type that is BareMetalHost.

  • metadata

    The metadata field contains the following subfields:

    • name

      Name of the BareMetalHost object.

    • namespace

      Project in which the BareMetalHost object was created.

    • annotations

      Available since Cluster releases 12.5.0, 11.5.0, and 7.11.0. Key-value pairs to attach additional metadata to the object:

      • kaas.mirantis.com/baremetalhost-credentials-name

        Key that connects the BareMetalHost object with a previously created BareMetalHostCredential object. The value of this key must match the BareMetalHostCredential object name.

      • host.dnsmasqs.metal3.io/address

        Available since Cluster releases 17.0.0 and 16.0.0. Key that assigns a particular IP address to a bare metal host during PXE provisioning.

      • baremetalhost.metal3.io/detached

        Available since Cluster releases 17.0.0 and 16.0.0. Key that pauses host management by the bare metal Operator for a manual IP address assignment.

        Note

        If the host provisioning has already started or completed, adding of this annotation deletes the information about the host from Ironic without triggering deprovisioning. The bare metal Operator recreates the host in Ironic once you remove the annotation. For details, see Metal3 documentation.

      • inspect.metal3.io/hardwaredetails-storage-sort-term

        Available since Cluster releases 17.0.0 and 16.0.0. Optional. Key that defines sorting of the bmh:status:storage[] list during inspection of a bare metal host. Accepts multiple tags separated by a comma or semicolon with the ASC/DESC suffix for sorting direction. Example terms: sizeBytes DESC, hctl ASC, type ASC, name DESC.

        Since Cluster releases 17.1.0 and 16.1.0, the following default value applies: hctl ASC, wwn ASC, by_id ASC, name ASC.

    • labels

      Labels used by the bare metal provider to find a matching BareMetalHost object to deploy a machine:

      • hostlabel.bm.kaas.mirantis.com/controlplane

      • hostlabel.bm.kaas.mirantis.com/worker

      • hostlabel.bm.kaas.mirantis.com/storage

      Each BareMetalHost object added using the Container Cloud web UI is assigned one of these labels. If the BareMetalHost and Machine objects are created using the API, any label can be used to match these objects for a bare metal host to deploy a machine.

      Warning

      Labels and annotations that are not documented in this API Reference are generated automatically by Container Cloud. Do not modify them using the Container Cloud API.

Configuration example:

apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: master-0
  namespace: default
  labels:
    kaas.mirantis.com/baremetalhost-id: hw-master-0
  annotations: # Since 2.21.0 (7.11.0, 12.5.0, 11.5.0)
    kaas.mirantis.com/baremetalhost-credentials-name: hw-master-0-credentials
BareMetalHost configuration

The spec section for the BareMetalHost object defines the desired state of BareMetalHost. It contains the following fields:

  • bmc

    Details for communication with the Baseboard Management Controller (bmc) module on a host. Contains the following subfields:

    • address

      URL for communicating with the BMC. URLs vary depending on the communication protocol and the BMC type, for example:

      • IPMI

        Default BMC type in the ipmi://<host>:<port> format. You can also use a plain <host>:<port> format. A port is optional if using the default port 623.

        You can change the IPMI privilege level from the default ADMINISTRATOR to OPERATOR with an optional URL parameter privilegelevel: ipmi://<host>:<port>?privilegelevel=OPERATOR.

      • Redfish

        BMC type in the redfish:// format. To disable TLS, you can use the redfish+http:// format. A host name or IP address and a path to the system ID are required for both formats. For example, redfish://myhost.example/redfish/v1/Systems/System.Embedded.1 or redfish://myhost.example/redfish/v1/Systems/1.

    • credentialsName

      Name of the secret containing the BareMetalHost object credentials.

      • Since Container Cloud 2.21.0 and 2.21.1 for MOSK 22.5, this field is updated automatically during cluster deployment. For details, see BareMetalHostCredential.

      • Before Container Cloud 2.21.0 or MOSK 22.5, the secret requires the username and password keys in the Base64 encoding.

    • disableCertificateVerification

      Boolean to skip certificate validation when true.

  • bootMACAddress

    MAC address for booting.

  • bootMode

    Boot mode: UEFI if UEFI is enabled and legacy if disabled.

  • online

    Defines whether the server must be online after provisioning is done.

    Warning

    Setting online: false to more than one bare metal host in a management cluster at a time can make the cluster non-operational.

Configuration example for Container Cloud 2.21.0 or later:

metadata:
  name: node-1-name
  annotations:
    kaas.mirantis.com/baremetalhost-credentials-name: node-1-credentials # Since Container Cloud 2.21.0
spec:
  bmc:
    address: 192.168.33.106:623
    credentialsName: ''
  bootMACAddress: 0c:c4:7a:a8:d3:44
  bootMode: legacy
  online: true

Configuration example for Container Cloud 2.20.1 or earlier:

metadata:
  name: node-1-name
spec:
  bmc:
    address: 192.168.33.106:623
    credentialsName: node-1-credentials-secret-f9g7d9f8h79
  bootMACAddress: 0c:c4:7a:a8:d3:44
  bootMode: legacy
  online: true
BareMetalHost status

The status field of the BareMetalHost object defines the current state of BareMetalHost. It contains the following fields:

  • errorMessage

    Last error message reported by the provisioning subsystem.

  • goodCredentials

    Last credentials that were validated.

  • hardware

    Hardware discovered on the host. Contains information about the storage, CPU, host name, firmware, and so on.

  • operationalStatus

    Status of the host:

    • OK

      Host is configured correctly and is manageable.

    • discovered

      Host is only partially configured. For example, the bmc address is discovered but not the login credentials.

    • error

      Host has any sort of error.

  • poweredOn

    Host availability status: powered on (true) or powered off (false).

  • provisioning

    State information tracked by the provisioner:

    • state

      Current action being done with the host by the provisioner.

    • id

      UUID of a machine.

  • triedCredentials

    Details of the last credentials sent to the provisioning backend.

Configuration example:

status:
  errorMessage: ""
  goodCredentials:
    credentials:
      name: master-0-bmc-secret
      namespace: default
    credentialsVersion: "13404"
  hardware:
    cpu:
      arch: x86_64
      clockMegahertz: 3000
      count: 32
      flags:
      - 3dnowprefetch
      - abm
      ...
      model: Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
    firmware:
      bios:
        date: ""
        vendor: ""
        version: ""
    hostname: ipa-fcab7472-892f-473c-85a4-35d64e96c78f
    nics:
    - ip: ""
      mac: 0c:c4:7a:a8:d3:45
      model: 0x8086 0x1521
      name: enp8s0f1
      pxe: false
      speedGbps: 0
      vlanId: 0
      ...
    ramMebibytes: 262144
    storage:
    - by_path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1
      hctl: "4:0:0:0"
      model: Micron_5200_MTFD
      name: /dev/sda
      rotational: false
      serialNumber: 18381E8DC148
      sizeBytes: 1920383410176
      vendor: ATA
      wwn: "0x500a07511e8dc148"
      wwnWithExtension: "0x500a07511e8dc148"
      ...
    systemVendor:
      manufacturer: Supermicro
      productName: SYS-6018R-TDW (To be filled by O.E.M.)
      serialNumber: E16865116300188
  operationalStatus: OK
  poweredOn: true
  provisioning:
    state: provisioned
  triedCredentials:
    credentials:
      name: master-0-bmc-secret
      namespace: default
    credentialsVersion: "13404"
BareMetalHostCredential

Available since 2.21.0 and 2.21.1 for MOSK 22.5

This section describes the BareMetalHostCredential custom resource (CR) used in the Mirantis Container Cloud API. The BareMetalHostCredential object is created for each BareMetalHostInventory and contains all information about the Baseboard Management Controller (bmc) credentials.

Note

Before update of the management cluster to Container Cloud 2.29.0 (Cluster release 16.4.0), instead of BareMetalHostInventory, use the BareMetalHost object. For details, see BareMetalHost.

Caution

While the Cluster release of the management cluster is 16.4.0, BareMetalHostInventory operations are allowed to m:kaas@management-admin only. Once the management cluster is updated to the Cluster release 16.4.1 (or later), this limitation will be lifted.

Warning

The kubectl apply command automatically saves the applied data as plain text into the kubectl.kubernetes.io/last-applied-configuration annotation of the corresponding object. This may result in revealing sensitive data in this annotation when creating or modifying the object.

Therefore, do not use kubectl apply on this object. Use kubectl create, kubectl patch, or kubectl edit instead.

If you used kubectl apply on this object, you can remove the kubectl.kubernetes.io/last-applied-configuration annotation from the object using kubectl edit.

For demonstration purposes, the BareMetalHostCredential CR can be split into the following sections:

BareMetalHostCredential metadata

The BareMetalHostCredential metadata contains the following fields:

  • apiVersion

    API version of the object that is kaas.mirantis.com/v1alpha1

  • kind

    Object type that is BareMetalHostCredential

  • metadata

    The metadata field contains the following subfields:

    • name

      Name of the BareMetalHostCredential object

    • namespace

      Container Cloud project in which the related BareMetalHostInventory object was created

    • labels

      Labels used by the bare metal provider:

      • kaas.mirantis.com/region

        Region name

        Note

        The kaas.mirantis.com/region label is removed from all Container Cloud objects in 2.26.0 (Cluster releases 17.1.0 and 16.1.0). Therefore, do not add the label starting from these releases. On existing clusters updated to these releases, or if the label was added manually, it is ignored by Container Cloud.

BareMetalHostCredential configuration

The spec section for the BareMetalHostCredential object contains sensitive information that is moved to a separate Secret object during cluster deployment:

  • username

    User name of the bmc account with administrator privileges to control the power state and boot source of the bare metal host

  • password

    Details on the user password of the bmc account with administrator privileges:

    • value

      Password that will be automatically removed once saved in a separate Secret object

    • name

      Name of the Secret object where credentials are saved

The BareMetalHostCredential object creation triggers the following automatic actions:

  1. Create an underlying Secret object containing data about username and password of the bmc account of the related BareMetalHostCredential object.

  2. Erase sensitive password data of the bmc account from the BareMetalHostCredential object.

  3. Add the created Secret object name to the spec.password.name section of the related BareMetalHostCredential object.

  4. Update BareMetalHostInventory.spec.bmc.bmhCredentialsName with the BareMetalHostCredential object name.

    Note

    Before Container Cloud 2.29.0 (17.4.0 and 16.4.0), BareMetalHost.spec.bmc.credentialsName was updated with the BareMetalHostCredential object name.

Note

When you delete a BareMetalHostInventory object, the related BareMetalHostCredential object is deleted automatically.

Note

On existing clusters, a BareMetalHostCredential object is automatically created for each BareMetalHostInventory object during a cluster update.

Example of BareMetalHostCredential before the cluster deployment starts:

apiVersion: kaas.mirantis.com/v1alpha1
kind: BareMetalHostCredential
metadata:
  name: hw-master-0-credentials
  namespace: default
spec:
  username: admin
  password:
    value: superpassword

Example of BareMetalHostCredential created during cluster deployment:

apiVersion: kaas.mirantis.com/v1alpha1
kind: BareMetalHostCredential
metadata:
  name: hw-master-0-credentials
  namespace: default
spec:
  username: admin
  password:
    name: secret-cv98n7c0vb9
BareMetalHostInventory

Available since Container Cloud 2.29.0 (Cluster release 16.4.0)

Note

Before update of the management cluster to Container Cloud 2.29.0 (Cluster release 16.4.0), instead of BareMetalHostInventory, use the BareMetalHost object. For details, see BareMetalHost.

Caution

While the Cluster release of the management cluster is 16.4.0, BareMetalHostInventory operations are allowed to m:kaas@management-admin only. Once the management cluster is updated to the Cluster release 16.4.1 (or later), this limitation will be lifted.

This section describes the BareMetalHostInventory resource used in the Mirantis Container Cloud API to monitor and manage the state of a bare metal server. This includes inspecting the host hardware, firmware, operating system provisioning, power control, and server deprovision. The BareMetalHostInventory object is created for each Machine and contains all information about machine hardware configuration.

Each BareMetalHostInventory object is synchronized with an automatically created BareMetalHost object, which is used for internal purposes of the Container Cloud private API.

Use the BareMetalHostInventory object instead of BareMetalHost for adding and modifying configuration of a bare metal server.

Caution

Any change in the BareMetalHost object will be overwritten by BareMetalHostInventory.

For any existing BareMetalHost object, a BareMetalHostInventory object is created automatically during management cluster update to Container Cloud 2.29.0 (Cluster release 16.4.0).

For demonstration purposes, the Container Cloud BareMetalHostInventory custom resource (CR) can be split into the following major sections:

BareMetalHostInventory metadata

The BareMetalHostInventory CR contains the following fields:

  • apiVersion

    API version of the object that is kaas.mirantis.com/v1alpha1.

  • kind

    Object type that is BareMetalHostInventory.

  • metadata

    The metadata field contains the following subfields:

    • name

      Name of the BareMetalHostInventory object.

    • namespace

      Project in which the BareMetalHostInventory object was created.

    • annotations

      • host.dnsmasqs.metal3.io/address

        Key that assigns a particular IP address to a bare metal host during PXE provisioning. For details, see Manually allocate IP addresses for bare metal hosts.

      • baremetalhost.metal3.io/detached

        Key that pauses host management by the bare metal Operator for a manual IP address assignment.

        Note

        If the host provisioning has already started or completed, adding of this annotation deletes the information about the host from Ironic without triggering deprovisioning. The bare metal Operator recreates the host in Ironic once you remove the annotation. For details, see Metal3 documentation.

      • inspect.metal3.io/hardwaredetails-storage-sort-term

        Optional. Key that defines sorting of the bmh:status:storage[] list during inspection of a bare metal host. Accepts multiple tags separated by a comma or semicolon with the ASC/DESC suffix for sorting direction. Example terms: sizeBytes DESC, hctl ASC, type ASC, name DESC.

        The default value is hctl ASC, wwn ASC, by_id ASC, name ASC.

    • labels

      Labels used by the bare metal provider to find a matching BareMetalHostInventory object for machine deployment. For example:

      • hostlabel.bm.kaas.mirantis.com/controlplane

      • hostlabel.bm.kaas.mirantis.com/worker

      • hostlabel.bm.kaas.mirantis.com/storage

      Warning

      Labels and annotations that are not documented in this API Reference are generated automatically by Container Cloud. Do not modify them using the Container Cloud API.

Configuration example:

apiVersion: kaas.mirantis.com/v1alpha1
kind: BareMetalHostInventory
metadata:
  name: master-0
  namespace: default
  labels:
    kaas.mirantis.com/baremetalhost-id: hw-master-0
  annotations:
    inspect.metal3.io/hardwaredetails-storage-sort-term: hctl ASC, wwn ASC, by_id ASC, name ASC
BareMetalHostInventory configuration

The spec section for the BareMetalHostInventory object defines the required state of BareMetalHostInventory. It contains the following fields:

  • bmc

    Details for communication with the Baseboard Management Controller (bmc) module on a host. Contains the following subfields:

    • address

      URL for communicating with the BMC. URLs vary depending on the communication protocol and the BMC type. For example:

      • IPMI

        Default BMC type in the ipmi://<host>:<port> format. You can also use a plain <host>:<port> format. A port is optional if using the default port 623.

        You can change the IPMI privilege level from the default ADMINISTRATOR to OPERATOR with an optional URL parameter privilegelevel: ipmi://<host>:<port>?privilegelevel=OPERATOR.

      • Redfish

        BMC type in the redfish:// format. To disable TLS, you can use the redfish+http:// format. A host name or IP address and a path to the system ID are required for both formats. For example, redfish://myhost.example/redfish/v1/Systems/System.Embedded.1 or redfish://myhost.example/redfish/v1/Systems/1.

    • bmhCredentialsName

      Name of the BareMetalHostCredential object.

    • disableCertificateVerification

      Key that disables certificate validation. Boolean, false by default. When true, the validation is skipped.

  • bootMACAddress

    MAC address for booting.

  • bootMode

    Boot mode: UEFI if UEFI is enabled and legacy if disabled.

  • online

    Defines whether the server must be online after provisioning is done.

    Warning

    Setting online: false to more than one bare metal host in a management cluster at a time can make the cluster non-operational.

Configuration example:

metadata:
  name: master-0
spec:
  bmc:
    address: 192.168.33.106:623
    bmhCredentialsName: 'master-0-bmc-credentials'
  bootMACAddress: 0c:c4:7a:a8:d3:44
  bootMode: legacy
  online: true
BareMetalHostInventory status

The status field of the BareMetalHostInventory object defines the current state of BareMetalHostInventory. It contains the following fields:

  • errorMessage

    Latest error message reported by the provisioning subsystem.

  • errorCount

    Number of errors that the host has encountered since the last successful operation.

  • operationalStatus

    Status of the host:

    • OK

      Host is configured correctly and is manageable.

    • discovered

      Host is only partially configured. For example, the bmc address is discovered but the login credentials are not.

    • error

      Host has any type of error.

  • poweredOn

    Host availability status: powered on (true) or powered off (false).

  • operationHistory

    Key that contains information about performed operations.

Status example:

status:
  errorCount: 0
  errorMessage: ""
  operationHistory:
    deprovision:
      end: null
      start: null
    inspect:
      end: "2025-01-01T00:00:00Z"
      start: "2025-01-01T00:00:00Z"
    provision:
      end: "2025-01-01T00:00:00Z"
      start: "2025-01-01T00:00:00Z"
    register:
      end: "2025-01-01T00:00:00Z"
      start: "2025-01-01T00:00:00Z"
  operationalStatus: OK
  poweredOn: true
BareMetalHostProfile

This section describes the BareMetalHostProfile resource used in Mirantis Container Cloud API to define how the storage devices and operating system are provisioned and configured.

For demonstration purposes, the Container Cloud BareMetalHostProfile custom resource (CR) is split into the following major sections:

metadata

The Container Cloud BareMetalHostProfile CR contains the following fields:

  • apiVersion

    API version of the object that is metal3.io/v1alpha1.

  • kind

    Object type that is BareMetalHostProfile.

  • metadata

    The metadata field contains the following subfields:

    • name

      Name of the bare metal host profile.

    • namespace

      Project in which the bare metal host profile was created.

Configuration example:

apiVersion: metal3.io/v1alpha1
kind: BareMetalHostProfile
metadata:
  name: default
  namespace: default
spec

The spec field of the BareMetalHostProfile object contains the fields to customize your hardware configuration:

Warning

Any data stored on any device defined in the fileSystems list can be deleted or corrupted during cluster (re)deployment. It happens because each device from the fileSystems list is a part of the rootfs directory tree that is overwritten during (re)deployment.

Examples of affected devices include:

  • A raw device partition with a file system on it

  • A device partition in a volume group with a logical volume that has a file system on it

  • An mdadm RAID device with a file system on it

  • An LVM RAID device with a file system on it

The wipe field (deprecated) or the wipeDevice structure (recommended since Container Cloud 2.26.0) has no effect in this case and cannot protect data on these devices.

Therefore, to prevent data loss, move the necessary data from these file systems to another server beforehand, if required.

  • devices

    List of definitions of the physical storage devices. To configure more than three storage devices per host, add additional devices to this list. Each device in the list can have one or more partitions defined by the list in the partitions field.

    • Each device in the list must have the following fields in the properties section for device handling:

      • workBy (recommended, string)

        Defines how the device should be identified. Accepts a comma-separated string with the following recommended value (in order of priority): by_id,by_path,by_wwn,by_name. Since 2.25.1, this value is set by default.

      • wipeDevice (recommended, object)

        Available since Container Cloud 2.26.0 (Cluster releases 17.1.0 and 16.1.0). Enables and configures cleanup of a device or its metadata before cluster deployment. Contains the following fields:

        • eraseMetadata (dictionary)

          Enables metadata cleanup of a device. Contains the following field:

          • enabled (boolean)

            Enables the eraseMetadata option. False by default.

        • eraseDevice (dictionary)

          Configures a complete cleanup of a device. Contains the following fields:

          • blkdiscard (object)

            Executes the blkdiscard command on the target device to discard all data blocks. Contains the following fields:

            • enabled (boolean)

              Enables the blkdiscard option. False by default.

            • zeroout (string)

              Configures writing of zeroes to each block during device erasure. Contains the following options:

              • fallback - default, blkdiscard attempts to write zeroes only if the device does not support the block discard feature. In this case, the blkdiscard command is re-executed with an additional --zeroout flag.

              • always - always write zeroes.

              • never - never write zeroes.

          • userDefined (object)

            Enables execution of a custom command or shell script to erase the target device. Contains the following fields:

            • enabled (boolean)

              Enables the userDefined option. False by default.

            • command (string)

              Defines a command to erase the target device. Empty by default. Mutually exclusive with script. For the command execution, the ansible.builtin.command module is called.

            • script (string)

              Defines a plain-text script allowing pipelines (|) to erase the target device. Empty by default. Mutually exclusive with command. For the script execution, the ansible.builtin.shell module is called.

            When executing a command or a script, you can use the following environment variables:

            • DEVICE_KNAME (always defined by Ansible)

              Device kernel path, for example, /dev/sda

            • DEVICE_BY_NAME (optional)

              Link from /dev/disk/by-name/ if it was added by udev

            • DEVICE_BY_ID (optional)

              Link from /dev/disk/by-id/ if it was added by udev

            • DEVICE_BY_PATH (optional)

              Link from /dev/disk/by-path/ if it was added by udev

            • DEVICE_BY_WWN (optional)

              Link from /dev/disk/by-wwn/ if it was added by udev

        For configuration details, see Wipe a device or partition.

      • wipe (boolean, deprecated)

        Defines whether the device must be wiped of the data before being used.

        Note

        This field is deprecated since Container Cloud 2.26.0 (Cluster releases 17.1.0 and 16.1.0) in favor of wipeDevice and will be removed in one of the following releases.

        For backward compatibility, any existing wipe: true option is automatically converted to the following structure:

        wipeDevice:
          eraseMetadata:
            enabled: True
        

        Before Container Cloud 2.26.0, the wipe field is mandatory.

    • Each device in the list can have the following fields in its properties section that affect the selection of the specific device when the profile is applied to a host:

      • type (optional, string)

        The device type. Possible values: hdd, ssd, nvme. This property is used to filter selected devices by type.

      • partflags (optional, string)

        Extra partition flags to be applied on a partition. For example, bios_grub.

      • minSizeGiB, maxSizeGiB (deprecated, optional, string)

        The lower and upper limits of the selected device size. Only the devices matching these criteria are considered for allocation. An omitted parameter means no lower or upper limit.

        The minSize and maxSize parameter names are also available for the same purpose.

        Caution

        Mirantis recommends using only one type of parameter names and units consistently throughout the configuration files. If both sizeGiB and size are used, sizeGiB is ignored during deployment and the suffix is adjusted accordingly. For example, 1.5Gi is serialized as 1536Mi. A size value without units is counted in bytes. For example, size: 120 means 120 bytes.

        Since Container Cloud 2.26.0 (Cluster releases 17.1.0 and 16.1.0), minSizeGiB and maxSizeGiB are deprecated. Instead of floats that define sizes in GiB for *GiB fields, use the <sizeNumber>Gi text notation (Ki, Mi, and so on). All newly created profiles are automatically migrated to the Gi syntax. In existing profiles, migrate the syntax manually.
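
        For example, the same size limits can be expressed in the deprecated and in the current notation as follows (the values are illustrative):

        # Deprecated notation:
        # minSizeGiB: 30
        # maxSizeGiB: 60
        # Current notation:
        minSize: 30Gi
        maxSize: 60Gi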

      • byName (forbidden in new profiles since 2.27.0, optional, string)

        The specific device name to be selected during provisioning, such as /dev/sda.

        Warning

        With NVMe devices and certain hardware disk controllers, you cannot reliably select such a device by its system name. Therefore, use a more specific selector, such as byPath, serialNumber, or wwn.

        Caution

        Since Container Cloud 2.26.0 (Cluster releases 17.1.0 and 16.1.0), byName is deprecated. Since Container Cloud 2.27.0 (Cluster releases 17.2.0 and 16.2.0), byName is blocked by admission-controller in new BareMetalHostProfile objects. As a replacement, use a more specific selector, such as byPath, serialNumber, or wwn.

      • byPath (optional, string) Since 2.26.0 (17.1.0, 16.1.0)

        The specific device name with its path to be selected during provisioning, such as /dev/disk/by-path/pci-0000:00:07.0.

      • serialNumber (optional, string) Since 2.26.0 (17.1.0, 16.1.0)

        The specific serial number of a physical disk to be selected during provisioning, such as S2RBNXAH116186E.

      • wwn (optional, string) Since 2.26.0 (17.1.0, 16.1.0)

        The specific World Wide Name number of a physical disk to be selected during provisioning, such as 0x5002538d409aeeb4.

        Warning

        When using strict filters, such as byPath, serialNumber, or wwn, Mirantis strongly recommends not combining them with a soft filter, such as minSize / maxSize. Use only one approach.

  • softRaidDevices Tech Preview

    List of definitions of a software-based Redundant Array of Independent Disks (RAID) created by mdadm. Use the following fields to describe an mdadm RAID device:

    • name (mandatory, string)

      Name of a RAID device. Supports the following formats:

      • dev path, for example, /dev/md0.

      • simple name, for example, raid-name that will be created as /dev/md/raid-name on the target OS.

    • devices (mandatory, list)

      List of partitions from the devices list. The resulting list of devices must expand into at least two partitions.

    • level (optional, string)

      Level of a RAID device, defaults to raid1. Possible values: raid1, raid0, raid10.

    • metadata (optional, string)

      Metadata version of RAID, defaults to 1.0. Possible values: 1.0, 1.1, 1.2. For details about the differences in metadata, see man 8 mdadm.

      Warning

      The EFI system partition partflags: ['esp'] must be a physical partition in the main partition table of the disk, not under LVM or mdadm software RAID.
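
    For example, a minimal sketch of an mdadm RAID definition that reuses the partition names from the general configuration example later in this section:

    softRaidDevices:
    - name: md_root
      level: raid1
      metadata: "1.2"
      devices:
      - partition: md_root_part1
      - partition: md_root_part2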

  • fileSystems

    List of file systems. Each file system can be created on top of a device, a partition, or a logical volume. If more file systems are required for additional devices, define them in this field. Each file system in the list has the following fields:

    • fileSystem (mandatory, string)

      Type of a file system to create on a partition. For example, ext4, vfat.

    • mountOpts (optional, string)

      Comma-separated string of mount options. For example, rw,noatime,nodiratime,lazytime,nobarrier,commit=240,data=ordered.

    • mountPoint (optional, string)

      Target mount point for a file system. For example, /mnt/local-volumes/.

    • partition (optional, string)

      Partition name to be selected for creation from the list in the devices section. For example, uefi.

    • logicalVolume (optional, string)

      LVM logical volume name if the file system is supposed to be created on an LVM volume defined in the logicalVolumes section. For example, lvp.
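
    For example, a minimal sketch with one file system bound to a partition and one bound to a logical volume (the names and mount options are illustrative):

    fileSystems:
    - fileSystem: vfat
      partition: uefi
      mountPoint: /boot/efi
    - fileSystem: ext4
      logicalVolume: lvp
      mountPoint: /mnt/local-volumes/
      mountOpts: rw,noatime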

  • logicalVolumes

    List of LVM logical volumes. Every logical volume belongs to a volume group from the volumeGroups list and has the size attribute for a size in the corresponding units.

    You can also add a software-based RAID raid1 created by LVM using the following fields:

    • name (mandatory, string)

      Name of a logical volume.

    • vg (mandatory, string)

      Name of a volume group that must be a name from the volumeGroups list.

    • sizeGiB or size (mandatory, string)

      Size of a logical volume in gigabytes. When set to 0, all available space on the corresponding volume group will be used. The 0 value equals -l 100%FREE in the lvcreate command.

    • type (optional, string)

      Type of a logical volume. If you require a usual logical volume, you can omit this field.

      Possible values:

      • linear

        Default. A usual logical volume. This value is implied for bare metal host profiles created using the Container Cloud release earlier than 2.12.0 where the type field is unavailable.

      • raid1 Tech Preview

        Serves to build the raid1 type of LVM. Equals to the lvcreate --type raid1... command. For details, see man 8 lvcreate and man 7 lvmraid.

      Caution

      Mirantis recommends using only one type of parameter names and units consistently throughout the configuration files. If both sizeGiB and size are used, sizeGiB is ignored during deployment and the suffix is adjusted accordingly. For example, 1.5Gi is serialized as 1536Mi. A size value without units is counted in bytes. For example, size: 120 means 120 bytes.
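
    For example, a minimal sketch with a usual (linear) volume and an LVM-based raid1 volume; the volume names and sizes are illustrative:

    logicalVolumes:
    - name: root
      vg: lvm_root
      size: 0
      type: linear
    - name: mirrored_lvp
      vg: lvm_lvp
      size: 100Gi
      type: raid1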

  • volumeGroups

    List of definitions of LVM volume groups. Each volume group contains one or more devices or partitions from the devices list and has the following fields:

    • devices (mandatory, list)

      List of partitions to be used in a volume group. For example:

      - partition: lvm_root_part1
      - partition: lvm_root_part2
      

    • name (mandatory, string)

      Name of a volume group to be created. For example, lvm_root.

  • preDeployScript (optional, string)

    Shell script that executes on a host before provisioning the target operating system inside the ramfs system.

  • postDeployScript (optional, string)

    Shell script that executes on a host after deploying the operating system inside the ramfs system that is chrooted to the target operating system. To use a specific default gateway (for example, to have Internet access) at this stage, refer to MOSK Deployment Guide: Configure multiple DHCP address ranges.

  • grubConfig (optional, object)

    Set of options for the Linux GRUB bootloader on the target operating system. Contains the following field:

    • defaultGrubOptions (optional, array)

      Set of options passed to the Linux GRUB bootloader. Each string in the list defines one parameter. For example:

      defaultGrubOptions:
      - GRUB_DISABLE_RECOVERY="true"
      - GRUB_PRELOAD_MODULES=lvm
      - GRUB_TIMEOUT=20
      
  • kernelParameters:sysctl (optional, object)

    List of kernel sysctl options passed to /etc/sysctl.d/999-baremetal.conf during bare metal host provisioning. For example:

    kernelParameters:
      sysctl:
        fs.aio-max-nr: "1048576"
        fs.file-max: "9223372036854775807"
    

    For the list of options prohibited to change, refer to MKE documentation: Set up kernel default protections.

    Note

    If asymmetric traffic is expected on some of the managed cluster nodes, enable the loose mode for the corresponding interfaces on those nodes by setting the net.ipv4.conf.<interface-name>.rp_filter parameter to "2" in the kernelParameters.sysctl section. For example:

    kernelParameters:
      sysctl:
        net.ipv4.conf.k8s-lcm.rp_filter: "2"
    
  • kernelParameters:modules (optional, object)

    List of options for kernel modules to be passed to /etc/modprobe.d/{filename} during a bare metal host provisioning. For example:

    kernelParameters:
      modules:
      - content: |
          options kvm_intel nested=1
        filename: kvm_intel.conf
    
Configuration example with strict filtering for device - applies since 2.26.0 (17.1.0 and 16.1.0)
spec:
  devices:
  - device:
      wipe: true
      workBy: by_wwn,by_path,by_id,by_name
      wwn: "0x5002538d409aeeb4"
    partitions:
    - name: bios_grub
      partflags:
      - bios_grub
      size: 4Mi
      wipe: true
    - name: uefi
      partflags:
      - esp
      size: 200Mi
      wipe: true
    - name: config-2
      size: 64Mi
      wipe: true
    - name: lvm_root_part
      size: 0
      wipe: true
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:1f.2-ata-1
      minSize: 30Gi
      wipe: true
      workBy: by_id,by_path,by_wwn,by_name
    partitions:
    - name: lvm_lvp_part1
      size: 0
      wipe: true
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:1f.2-ata-3
      minSize: 30Gi
      wipe: true
      workBy: by_id,by_path,by_wwn,by_name
    partitions:
    - name: lvm_lvp_part2
      size: 0
      wipe: true
  - device:
      serialNumber: 'Z1X69DG6'
      wipe: true
      workBy: by_id,by_path,by_wwn,by_name
  fileSystems:
  - fileSystem: vfat
    partition: config-2
  - fileSystem: vfat
    mountPoint: /boot/efi
    partition: uefi
  - fileSystem: ext4
    logicalVolume: root
    mountPoint: /
  - fileSystem: ext4
    logicalVolume: lvp
    mountPoint: /mnt/local-volumes/
  grubConfig:
    defaultGrubOptions:
    - GRUB_DISABLE_RECOVERY="true"
    - GRUB_PRELOAD_MODULES=lvm
    - GRUB_TIMEOUT=5
  ...
  logicalVolumes:
  - name: root
    size: 0
    type: linear
    vg: lvm_root
  - name: lvp
    size: 0
    type: linear
    vg: lvm_lvp
  ...
  volumeGroups:
  - devices:
    - partition: lvm_root_part
    name: lvm_root
  - devices:
    - partition: lvm_lvp_part1
    - partition: lvm_lvp_part2
    name: lvm_lvp
General configuration example with the wipeDevice option for devices - applies since 2.26.0 (17.1.0 and 16.1.0)
spec:
  devices:
  - device:
      wipeDevice:
        eraseMetadata:
          enabled: true
      workBy: by_wwn,by_path,by_id,by_name
    partitions:
    - name: bios_grub
      partflags:
      - bios_grub
      size: 4Mi
    - name: uefi
      partflags:
      - esp
      size: 200Mi
    - name: config-2
      size: 64Mi
    - name: lvm_root_part
      size: 0
  - device:
      minSize: 30Gi
      wipeDevice:
        eraseMetadata:
          enabled: true
      workBy: by_id,by_path,by_wwn,by_name
    partitions:
    - name: lvm_lvp_part1
      size: 0
  - device:
      minSize: 30Gi
      wipeDevice:
        eraseMetadata:
          enabled: true
      workBy: by_id,by_path,by_wwn,by_name
    partitions:
    - name: lvm_lvp_part2
      size: 0
  - device:
      wipeDevice:
        eraseMetadata:
          enabled: true
      workBy: by_id,by_path,by_wwn,by_name
  fileSystems:
  - fileSystem: vfat
    partition: config-2
  - fileSystem: vfat
    mountPoint: /boot/efi
    partition: uefi
  - fileSystem: ext4
    logicalVolume: root
    mountPoint: /
  - fileSystem: ext4
    logicalVolume: lvp
    mountPoint: /mnt/local-volumes/
  grubConfig:
    defaultGrubOptions:
    - GRUB_DISABLE_RECOVERY="true"
    - GRUB_PRELOAD_MODULES=lvm
    - GRUB_TIMEOUT=5
  ...
  logicalVolumes:
  - name: root
    size: 0
    type: linear
    vg: lvm_root
  - name: lvp
    size: 0
    type: linear
    vg: lvm_lvp
  ...
  volumeGroups:
  - devices:
    - partition: lvm_root_part
    name: lvm_root
  - devices:
    - partition: lvm_lvp_part1
    - partition: lvm_lvp_part2
    name: lvm_lvp
General configuration example with the deprecated wipe option for devices - applies before 2.26.0 (17.1.0 and 16.1.0)
spec:
  devices:
   - device:
       #byName: /dev/sda
       minSize: 61GiB
       wipe: true
       workBy: by_wwn,by_path,by_id,by_name
     partitions:
       - name: bios_grub
         partflags:
         - bios_grub
         size: 4Mi
         wipe: true
       - name: uefi
         partflags: ['esp']
         size: 200Mi
         wipe: true
       - name: config-2
         # limited to 64Mi
         size: 64Mi
         wipe: true
       - name: md_root_part1
         wipe: true
         partflags: ['raid']
         size: 60Gi
       - name: lvm_lvp_part1
         wipe: true
         partflags: ['raid']
         # 0 means use all remaining space
         size: 0
   - device:
       #byName: /dev/sdb
       minSize: 61GiB
       wipe: true
       workBy: by_wwn,by_path,by_id,by_name
     partitions:
       - name: md_root_part2
         wipe: true
         partflags: ['raid']
         size: 60Gi
       - name: lvm_lvp_part2
         wipe: true
         # 0 means use all remaining space
         size: 0
   - device:
       #byName: /dev/sdc
        minSize: 30GiB
       wipe: true
       workBy: by_wwn,by_path,by_id,by_name
  softRaidDevices:
    - name: md_root
      metadata: "1.2"
      devices:
        - partition: md_root_part1
        - partition: md_root_part2
  volumeGroups:
    - name: lvm_lvp
      devices:
        - partition: lvm_lvp_part1
        - partition: lvm_lvp_part2
  logicalVolumes:
    - name: lvp
      vg: lvm_lvp
      # 0 means use all remaining space
      sizeGiB: 0
  postDeployScript: |
    #!/bin/bash -ex
    echo $(date) 'post_deploy_script done' >> /root/post_deploy_done
  preDeployScript: |
    #!/bin/bash -ex
    echo 'ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"' > /etc/udev/rules.d/60-ssd-scheduler.rules
    echo $(date) 'pre_deploy_script done' >> /root/pre_deploy_done
  fileSystems:
    - fileSystem: vfat
      partition: config-2
    - fileSystem: vfat
      partition: uefi
      mountPoint: /boot/efi/
    - fileSystem: ext4
      softRaidDevice: md_root
      mountPoint: /
    - fileSystem: ext4
      logicalVolume: lvp
      mountPoint: /mnt/local-volumes/
  grubConfig:
    defaultGrubOptions:
    - GRUB_DISABLE_RECOVERY="true"
    - GRUB_PRELOAD_MODULES=lvm
    - GRUB_TIMEOUT=20
  kernelParameters:
    sysctl:
    # For the list of options prohibited to change, refer to
    # https://docs.mirantis.com/mke/3.7/install/predeployment/set-up-kernel-default-protections.html
      kernel.dmesg_restrict: "1"
      kernel.core_uses_pid: "1"
      fs.file-max: "9223372036854775807"
      fs.aio-max-nr: "1048576"
      fs.inotify.max_user_instances: "4096"
      vm.max_map_count: "262144"
    modules:
      - filename: kvm_intel.conf
        content: |
          options kvm_intel nested=1
Mounting recommendations for the /var directory

During volume mounts, Mirantis strongly advises against mounting the entire /var directory to a separate disk or partition. Otherwise, the cloud-init service may fail to configure the target host system during the first boot.

Following this recommendation prevents the following cloud-init issue, which is caused by an asynchronous mount in systemd that ignores the mount dependency:

  1. The system boots and mounts the root (/) file system.

  2. The cloud-init service starts and processes data in /var/lib/cloud-init, which at this point still resides on the root file system.

  3. The systemd service mounts the separate /var file system over /var/lib/cloud-init and breaks the cloud-init service logic.

Recommended configuration example for /var/lib/nova
spec:
  devices:
    ...
    - device:
        serialNumber: BTWA516305VE480FGN
        type: ssd
        wipeDevice:
          eraseMetadata:
            enabled: true
      partitions:
        - name: var_lib_nova_part
          size: 0
  fileSystems:
    ....
    - fileSystem: ext4
      partition: var_lib_nova_part
      mountPoint: '/var/lib/nova'
      mountOpts: 'rw,noatime,nodiratime,lazytime'
Not recommended configuration example for /var
spec:
  devices:
    ...
    - device:
        serialNumber: BTWA516305VE480FGN
        type: ssd
        wipeDevice:
          eraseMetadata:
            enabled: true
      partitions:
        - name: var_part
          size: 0
  fileSystems:
    ....
    - fileSystem: ext4
      partition: var_part
      mountPoint: '/var' # NOT RECOMMENDED
      mountOpts: 'rw,noatime,nodiratime,lazytime'
Cluster

This section describes the Cluster resource used in the Mirantis Container Cloud API that contains the cluster-level parameters.

For demonstration purposes, the Container Cloud Cluster custom resource (CR) is split into the following major sections:

Warning

The fields of the Cluster resource that are located under the status section including providerStatus are available for viewing only. They are automatically generated by the bare metal cloud provider and must not be modified using Container Cloud API.

metadata

The Container Cloud Cluster CR contains the following fields:

  • apiVersion

    API version of the object that is cluster.k8s.io/v1alpha1.

  • kind

    Object type that is Cluster.

The metadata object field of the Cluster resource contains the following fields:

  • name

    Name of a cluster. A managed cluster name is specified under the Cluster Name field in the Create Cluster wizard of the Container Cloud web UI. A management cluster name is configurable in the bootstrap script.

  • namespace

    Project in which the cluster object was created. The management cluster is always created in the default project. The managed cluster project equals the selected project name.

  • labels

    Key-value pairs attached to the object:

    • kaas.mirantis.com/provider

      Provider type that is baremetal for the baremetal-based clusters.

    • kaas.mirantis.com/region

      Region name. The default region name for the management cluster is region-one.

      Note

      The kaas.mirantis.com/region label is removed from all Container Cloud objects in 2.26.0 (Cluster releases 17.1.0 and 16.1.0). Therefore, do not add the label starting from these releases. On existing clusters updated to these releases, or if added manually, this label is ignored by Container Cloud.

    Warning

    Labels and annotations that are not documented in this API Reference are generated automatically by Container Cloud. Do not modify them using the Container Cloud API.

Configuration example:

apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: demo
  namespace: test
  labels:
    kaas.mirantis.com/provider: baremetal
spec:providerSpec

The spec object field of the Cluster object represents the BaremetalClusterProviderSpec subresource that contains a complete description of the desired bare metal cluster state and all details to create the cluster-level resources. It also contains the fields required for LCM deployment and integration of the Container Cloud components.

The providerSpec object field is custom for each cloud provider and contains the following generic fields for the bare metal provider:

  • apiVersion

    API version of the object that is baremetal.k8s.io/v1alpha1

  • kind

    Object type that is BaremetalClusterProviderSpec

Configuration example:

spec:
  ...
  providerSpec:
    value:
      apiVersion: baremetal.k8s.io/v1alpha1
      kind: BaremetalClusterProviderSpec
spec:providerSpec common

The common providerSpec object field of the Cluster resource contains the following fields:

  • credentials

    Field reserved for other cloud providers, has an empty value. Disregard this field.

  • release

    Name of the ClusterRelease object to install on a cluster

  • helmReleases

    List of enabled Helm releases from the Release object that run on a cluster

  • proxy

    Name of the Proxy object

  • tls

    TLS configuration for endpoints of a cluster

    • keycloak

      Keycloak endpoint

      • tlsConfigRef

        Reference to the TLSConfig object

    • ui

      Web UI endpoint

      • tlsConfigRef

        Reference to the TLSConfig object

    For more details, see TLSConfig resource.
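
    For example, a minimal sketch of the tls section that references existing TLSConfig objects; the object name is a placeholder:

    tls:
      keycloak:
        tlsConfigRef: <tlsConfigObjectName>
      ui:
        tlsConfigRef: <tlsConfigObjectName>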

  • maintenance

    Maintenance mode of a cluster. Prepares a cluster for maintenance and enables the possibility to switch machines into maintenance mode.

  • containerRegistries

    List of the ContainerRegistries resources names.

  • ntpEnabled

    NTP server mode. Boolean, enabled by default.

    Since Container Cloud 2.23.0, you can optionally disable NTP so that Container Cloud does not manage the chrony configuration and you can use your own system for chrony management. Otherwise, configure the regional NTP server parameters to be applied to all machines of managed clusters.

    Before Container Cloud 2.23.0, you can optionally configure NTP parameters if servers from the Ubuntu NTP pool (*.ubuntu.pool.ntp.org) are accessible from the node where a management cluster is being provisioned. Otherwise, this configuration is mandatory.

    NTP configuration

    Configure the regional NTP server parameters to be applied to all machines of managed clusters.

    In the Cluster object, add the ntp:servers section with the list of required server names:

    spec:
      ...
      providerSpec:
        value:
          ntpEnabled: true
          kaas:
            ...
            regional:
              - helmReleases:
                - name: <providerName>-provider
                  values:
                    config:
                      lcm:
                        ...
                        ntp:
                          servers:
                          - 0.pool.ntp.org
                          ...
                provider: <providerName>
                ...

    To disable NTP:

    spec:
      ...
      providerSpec:
        value:
          ...
          ntpEnabled: false
          ...
    
  • audit Since 2.24.0 as TechPreview

    Optional. Auditing tools enabled on the cluster. Contains the auditd field that enables the Linux Audit daemon auditd to monitor activity of cluster processes and prevent potential malicious activity.

    Configuration for auditd

    In the Cluster object, add the auditd parameters:

    spec:
      providerSpec:
        value:
          audit:
            auditd:
              enabled: <bool>
              enabledAtBoot: <bool>
              backlogLimit: <int>
              maxLogFile: <int>
              maxLogFileAction: <string>
              maxLogFileKeep: <int>
              mayHaltSystem: <bool>
              presetRules: <string>
              customRules: <string>
              customRulesX32: <text>
              customRulesX64: <text>
    

    Configuration parameters for auditd:

    enabled

    Boolean, default - false. Enables the auditd role to install the auditd packages and configure rules. CIS rules: 4.1.1.1, 4.1.1.2.

    enabledAtBoot

    Boolean, default - false. Configures grub to audit processes that can be audited even if they start up prior to auditd startup. CIS rule: 4.1.1.3.

    backlogLimit

    Integer, default - none. Configures the backlog to hold records. If audit=1 is configured during boot, the backlog holds 64 records. If more than 64 records are created during boot, auditd records are lost and potential malicious activity may go undetected. CIS rule: 4.1.1.4.

    maxLogFile

    Integer, default - none. Configures the maximum size of the audit log file. Once the log reaches the maximum size, it is rotated and a new log file is created. CIS rule: 4.1.2.1.

    maxLogFileAction

    String, default - none. Defines handling of the audit log file reaching the maximum file size. Allowed values:

    • keep_logs - rotate logs but never delete them

    • rotate - add a cron job to compress rotated log files and keep maximum 5 compressed files.

    • compress - compress log files and keep them under the /var/log/auditd/ directory. Requires auditd_max_log_file_keep to be enabled.

    CIS rule: 4.1.2.2.

    maxLogFileKeep

    Integer, default - 5. Defines the number of compressed log files to keep under the /var/log/auditd/ directory. Requires auditd_max_log_file_action=compress. CIS rules - none.

    mayHaltSystem

    Boolean, default - false. Halts the system when the audit logs are full. Applies the following configuration:

    • space_left_action = email

    • action_mail_acct = root

    • admin_space_left_action = halt

    CIS rule: 4.1.2.3.

    customRules

    String, default - none. Base64-encoded content of the 60-custom.rules file for any architecture. CIS rules - none.

    customRulesX32

    String, default - none. Base64-encoded content of the 60-custom.rules file for the i386 architecture. CIS rules - none.

    customRulesX64

    String, default - none. Base64-encoded content of the 60-custom.rules file for the x86_64 architecture. CIS rules - none.

    presetRules

    String, default - none. Comma-separated list of the following built-in preset rules:

    • access

    • actions

    • delete

    • docker

    • identity

    • immutable

    • logins

    • mac-policy

    • modules

    • mounts

    • perm-mod

    • privileged

    • scope

    • session

    • system-locale

    • time-change

    Since Container Cloud 2.28.0 (Cluster releases 17.3.0 and 16.3.0), in the Technology Preview scope, some of the preset rules listed above are collected into the following groups that you can use in presetRules:

    • ubuntu-cis-rules - this group contains rules to comply with the Ubuntu CIS Benchmark recommendations, including the following CIS Ubuntu 20.04 v2.0.1 rules:

      • scope - 5.2.3.1

      • actions - same as 5.2.3.2

      • time-change - 5.2.3.4

      • system-locale - 5.2.3.5

      • privileged - 5.2.3.6

      • access - 5.2.3.7

      • identity - 5.2.3.8

      • perm-mod - 5.2.3.9

      • mounts - 5.2.3.10

      • session - 5.2.3.11

      • logins - 5.2.3.12

      • delete - 5.2.3.13

      • mac-policy - 5.2.3.14

      • modules - 5.2.3.19

    • docker-cis-rules - this group contains rules to comply with the Docker CIS Benchmark recommendations, including the docker preset that covers the Docker CIS v1.6.0 rules 1.1.3 - 1.1.18.

    You can also use two additional keywords inside presetRules:

    • none - select no built-in rules.

    • all - select all built-in rules. When using this keyword, you can add the ! prefix to a rule name to exclude that rule. The ! prefix is allowed only when all is specified as the first rule, and all rules with the ! prefix must be placed after the all keyword.

    Example configurations:

    • presetRules: none - disable all preset rules

    • presetRules: docker - enable only the docker rules

    • presetRules: access,actions,logins - enable only the access, actions, and logins rules

    • presetRules: ubuntu-cis-rules - enable all rules from the ubuntu-cis-rules group

    • presetRules: docker-cis-rules,actions - enable all rules from the docker-cis-rules group and the actions rule

    • presetRules: all - enable all preset rules

    • presetRules: all,!immutable,!session - enable all preset rules except immutable and session


    CIS controls: 4.1.3 (time-change), 4.1.4 (identity), 4.1.5 (system-locale), 4.1.6 (mac-policy), 4.1.7 (logins), 4.1.8 (session), 4.1.9 (perm-mod), 4.1.10 (access), 4.1.11 (privileged), 4.1.12 (mounts), 4.1.13 (delete), 4.1.14 (scope), 4.1.15 (actions), 4.1.16 (modules), and 4.1.17 (immutable).

    Docker CIS controls: 1.1.4, 1.1.8, 1.1.10, 1.1.12, 1.1.13, 1.1.15, 1.1.16, 1.1.17, 1.1.18, 1.2.3, 1.2.4, 1.2.5, 1.2.6, 1.2.7, 1.2.10, and 1.2.11.
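
    For example, a minimal sketch that enables auditd with the rule groups described above; the selected values are illustrative:

    spec:
      providerSpec:
        value:
          audit:
            auditd:
              enabled: true
              enabledAtBoot: true
              presetRules: ubuntu-cis-rules,docker-cis-rules
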
  • secureOverlay

    Optional. Technology Preview. Deprecated since Container Cloud 2.29.0 (Cluster releases 17.4.0 and 16.4.0). Available since Container Cloud 2.24.0 (Cluster release 14.0.0). Enables WireGuard for traffic encryption on the Kubernetes workloads network. Boolean. Disabled by default.

    Caution

    Before enabling WireGuard, ensure that the Calico MTU size is at least 60 bytes smaller than the interface MTU size of the workload network. IPv4 WireGuard uses a 60-byte header. For details, see Set the MTU size for Calico.

    Caution

    Changing this parameter on a running cluster causes a downtime that can vary depending on the cluster size.

    For more details about WireGuard, see Calico documentation: Encrypt in-cluster pod traffic.

Configuration example:

spec:
  ...
  providerSpec:
    value:
      credentials: ""
      publicKeys:
        - name: bootstrap-key
      release: ucp-5-7-0-3-3-3-tp11
      helmReleases:
        - name: metallb
          values:
            configInline:
              address-pools:
                - addresses:
                  - 10.0.0.101-10.0.0.120
                  name: default
                  protocol: layer2
        ...
        - name: stacklight
          ...
      tls:
        keycloak:
          certificate:
            name: keycloak
          hostname: container-cloud-auth.example.com
        ui:
          certificate:
            name: ui
          hostname: container-cloud-ui.example.com
      containerRegistries:
      - demoregistry
      ntpEnabled: false
      ...
spec:providerSpec configuration

This section represents the Container Cloud components that are enabled on a cluster. It contains the following fields:

  • management

    Configuration for the management cluster components:

    • enabled

      Management cluster enabled (true) or disabled (false).

    • helmReleases

      List of the management cluster Helm releases that will be installed on the cluster. A Helm release includes the name and values fields. The specified values will be merged with relevant Helm release values of the management cluster in the Release object.

  • regional

    List of regional cluster components for the provider:

    • provider

      Provider type that is baremetal.

    • helmReleases

      List of the regional Helm releases that will be installed on the cluster. A Helm release includes the name and values fields. The specified values will be merged with relevant regional Helm release values in the Release object.

  • release

    Name of the Container Cloud Release object.

Configuration example:

spec:
  ...
  providerSpec:
     value:
       kaas:
         management:
           enabled: true
           helmReleases:
             - name: kaas-ui
               values:
                 serviceConfig:
                   server: https://10.0.0.117
         regional:
           - helmReleases:
             - name: baremetal-provider
               values: {}
             provider: baremetal
           ...
         release: kaas-2-0-0
status:providerStatus common

Must not be modified using API

The common providerStatus object field of the Cluster resource contains the following fields:

  • apiVersion

    API version of the object that is baremetal.k8s.io/v1alpha1

  • kind

    Object type that is BaremetalClusterProviderStatus

  • loadBalancerHost

    Load balancer IP or host name of the Container Cloud cluster

  • apiServerCertificate

    Server certificate of Kubernetes API

  • ucpDashboard

    URL of the Mirantis Kubernetes Engine (MKE) Dashboard

  • maintenance

    Maintenance mode of a cluster. Prepares a cluster for maintenance and enables the possibility to switch machines into maintenance mode.

Configuration example:

status:
  providerStatus:
    apiVersion: baremetal.k8s.io/v1alpha1
    kind: BaremetalClusterProviderStatus
    loadBalancerHost: 10.0.0.100
    apiServerCertificate: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS…
    ucpDashboard: https://10.0.0.100:6443
status:providerStatus for cluster readiness

Must not be modified using API

The providerStatus object field of the Cluster resource that reflects the cluster readiness contains the following fields:

  • persistentVolumesProviderProvisioned

    Status of the persistent volumes provisioning. Prevents the Helm releases that require persistent volumes from being installed until some default StorageClass is added to the Cluster object.

  • helm

    Details about the deployed Helm releases:

    • ready

      Status of the deployed Helm releases. The true value indicates that all Helm releases are deployed successfully.

    • releases

      List of the enabled Helm releases that run on the Container Cloud cluster:

      • releaseStatuses

        List of the deployed Helm releases. The success: true field indicates that the release is deployed successfully.

      • stacklight

        Status of the StackLight deployment. Contains URLs of all StackLight components. The success: true field indicates that StackLight is deployed successfully.

  • nodes

    Details about the cluster nodes:

    • ready

      Number of nodes that completed the deployment or update.

    • requested

      Total number of nodes. If the number of ready nodes does not match the number of requested nodes, it means that a cluster is being currently deployed or updated.

  • notReadyObjects

    The list of the services, deployments, and statefulsets Kubernetes objects that are not in the Ready state yet. A service is not ready if its external address has not been provisioned yet. A deployment or statefulset is not ready if the number of ready replicas is not equal to the number of desired replicas. Each entry contains the name and namespace of the object and, for controllers, the number of ready and desired replicas. If all objects are ready, the notReadyObjects list is empty.

Configuration example:

status:
  providerStatus:
    persistentVolumesProviderProvisioned: true
    helm:
      ready: true
      releases:
        releaseStatuses:
          iam:
            success: true
          ...
        stacklight:
          alerta:
            url: http://10.0.0.106
          alertmanager:
            url: http://10.0.0.107
          grafana:
            url: http://10.0.0.108
          kibana:
            url: http://10.0.0.109
          prometheus:
            url: http://10.0.0.110
          success: true
    nodes:
      ready: 3
      requested: 3
    notReadyObjects:
      services:
        - name: testservice
          namespace: default
      deployments:
        - name: baremetal-provider
          namespace: kaas
          replicas: 3
          readyReplicas: 2
      statefulsets: {}
status:providerStatus for Open ID Connect

Must not be modified using API

The oidc section of the providerStatus object field in the Cluster resource reflects the Open ID Connect configuration details. It contains the required details to obtain a token for a Container Cloud cluster and consists of the following fields:

  • certificate

    Base64-encoded OIDC certificate.

  • clientId

    Client ID for OIDC requests.

  • groupsClaim

    Name of an OIDC groups claim.

  • issuerUrl

    Issuer URL to obtain the representation of the realm.

  • ready

    OIDC status relevance. If true, the status corresponds to the LCMCluster OIDC configuration.

Configuration example:

status:
  providerStatus:
    oidc:
      certificate: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUREekNDQWZ...
      clientId: kaas
      groupsClaim: iam_roles
      issuerUrl: https://10.0.0.117/auth/realms/iam
      ready: true
status:providerStatus for cluster releases

Must not be modified using API

The releaseRefs section of the providerStatus object field in the Cluster resource provides the current Cluster release version as well as the one available for upgrade. It contains the following fields:

  • current

    Details of the currently installed Cluster release:

    • lcmType

      Type of the Cluster release (ucp).

    • name

      Name of the Cluster release resource.

    • version

      Version of the Cluster release.

    • unsupportedSinceKaaSVersion

      Indicates that a Container Cloud release newer than the current one exists and that it does not support the current Cluster release.

  • available

    List of the releases available for upgrade. Contains the name and version fields.

Configuration example:

status:
  providerStatus:
    releaseRefs:
      available:
        - name: ucp-5-5-0-3-4-0-dev
          version: 5.5.0+3.4.0-dev
      current:
        lcmType: ucp
        name: ucp-5-4-0-3-3-0-beta1
        version: 5.4.0+3.3.0-beta1
HostOSConfiguration

TechPreview since 2.26.0 (17.1.0 and 16.1.0)

Warning

For security reasons and to ensure safe and reliable cluster operability, test this configuration on a staging environment before applying it to production. For any questions, contact Mirantis support.

Caution

As long as the feature is still in the development stage, Mirantis highly recommends deleting all HostOSConfiguration objects, if any, before automatic upgrade of the management cluster to Container Cloud 2.27.0 (Cluster release 16.2.0). After the upgrade, you can recreate the required objects using the updated parameters.

This precautionary step prevents re-processing and re-applying of existing configuration, which is defined in HostOSConfiguration objects, during management cluster upgrade to 2.27.0. Such behavior is caused by changes in the HostOSConfiguration API introduced in 2.27.0.

This section describes the HostOSConfiguration custom resource (CR) used in the Container Cloud API. It contains all necessary information to introduce and load modules for further configuration of the host operating system of the related Machine object.

Note

This object must be created and managed on the management cluster.

For demonstration purposes, we split the Container Cloud HostOSConfiguration CR into the following sections:

HostOSConfiguration metadata
metadata

The Container Cloud HostOSConfiguration custom resource (CR) contains the following fields:

  • apiVersion

    Object API version that is kaas.mirantis.com/v1alpha1.

  • kind

    Object type that is HostOSConfiguration.

The metadata object field of the HostOSConfiguration resource contains the following fields:

  • name

    Object name.

  • namespace

    Project in which the HostOSConfiguration object is created.

Configuration example:

apiVersion: kaas.mirantis.com/v1alpha1
kind: HostOSConfiguration
metadata:
  name: host-os-configuration-sample
  namespace: default
HostOSConfiguration configuration

The spec object field contains configuration for a HostOSConfiguration object and has the following fields:

  • machineSelector

    Required for production deployments. A set of Machine objects to apply the HostOSConfiguration object to. Has the format of the Kubernetes label selector.

  • configs

    Required. List of configurations to apply to Machine objects defined in machineSelector. Each entry has the following fields:

    • module

      Required. Name of the module that refers to an existing module in one of the HostOSConfigurationModules objects.

    • moduleVersion

      Required. Version of the module in use in the SemVer format.

    • description

      Optional. Description and purpose of the configuration.

    • order

      Optional. Positive integer between 1 and 1024 that indicates the order of applying the module configuration. A configuration with the lowest order value is applied first. If the order field is not set:

      The configuration is applied in the order of appearance in the list, after all configurations that have the order value set.

      The following rules apply when comparing each pair of such entries:

      1. The entries are ordered alphabetically by their module values unless these values are equal.

      2. If the module values are equal, the entries are ordered by their moduleVersion values, with the lower version applied first.

    • values

      Optional if secretValues is set. Module configuration in the format of key-value pairs.

    • secretValues

      Optional if values is set. Reference to a Secret object that contains the configuration values for the module:

      • namespace

        Project name of the Secret object.

      • name

        Name of the Secret object.

      Note

      You can use both values and secretValues together. However, if keys are duplicated, the secretValues data overrides the duplicated keys of the values data.

      Warning

      The referenced Secret object must contain only primitive non-nested values. Otherwise, the values will not be applied correctly.
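
      For illustration, a minimal sketch of such a Secret object with primitive, non-nested values only; the object name matches the configuration example below, while the keys and values are hypothetical:

      apiVersion: v1
      kind: Secret
      metadata:
        name: values-from-secret
        namespace: default
      stringData:
        token: "abc123"
        port: "8080"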

    • phase

      Optional. LCM phase, in which a module configuration must be executed. The only supported and default value is reconfigure. Hence, you may omit this field.

  • order Removed in 2.27.0 (17.2.0 and 16.2.0)

    Optional. Positive integer between 1 and 1024 that indicates the order of applying HostOSConfiguration objects on newly added or newly assigned machines. An object with the lowest order value is applied first. If the value is not set, the object is applied last in the order.

    If no order field is set for all HostOSConfiguration objects, the objects are sorted by name.

    Note

    If a user changes the HostOSConfiguration object that was already applied on some machines, then only the changed items from the spec.configs section of the HostOSConfiguration object are applied to those machines, and the execution order applies only to the changed items.

    The configuration changes are applied on corresponding LCMMachine objects almost immediately after host-os-modules-controller verifies the changes.

Configuration example:

spec:
   machineSelector:
      matchLabels:
        label-name: "label-value"
   configs:
   - description: Brief description of the configuration
     module: container-cloud-provided-module-name
     moduleVersion: 1.0.0
     order: 1
     # the 'phase' field is provided for illustration purposes. it is redundant
     # because the only supported value is "reconfigure".
     phase: "reconfigure"
     values:
       foo: 1
       bar: "baz"
     secretValues:
       name: values-from-secret
       namespace: default
HostOSConfiguration status

The status field of the HostOSConfiguration object contains the current state of the object:

  • controllerUpdate Since 2.27.0 (17.2.0 and 16.2.0)

    Reserved. Indicates whether the status updates are initiated by host-os-modules-controller.

  • isValid Since 2.27.0 (17.2.0 and 16.2.0)

    Indicates whether all given configurations have been validated successfully and are ready to be applied on machines. An invalid object is discarded from processing.

  • specUpdatedAt Since 2.27.0 (17.2.0 and 16.2.0)

    Defines the time of the last change in the object spec observed by host-os-modules-controller.

  • containsDeprecatedModules Since 2.28.0 (17.3.0 and 16.3.0)

    Indicates whether the object uses one or several deprecated modules. Boolean.

  • machinesStates Since 2.27.0 (17.2.0 and 16.2.0)

    Specifies the per-machine state observed by baremetal-provider. The keys are machine names, and each entry has the following fields:

    • observedGeneration

      Read-only. Specifies the sequence number representing the number of changes in the object since its creation. For example, during object creation, the value is 1.

    • selected

      Indicates whether the machine satisfied the selector of the object. Non-selected machines are not defined in machinesStates. Boolean.

    • secretValuesChanged

      Indicates whether the secret values have been changed and the corresponding stateItems have to be updated. Boolean.

      The value is set to true by host-os-modules-controller if changes in the secret data are detected. The value is set to false by baremetal-provider after processing.

    • configStateItemsStatuses

      Specifies key-value pairs with statuses of StateItems that are applied to the machine. Each key contains the name and version of the configuration module. Each key value has the following format:

      • Key: name of a configuration StateItem

      • Value: simplified status of the configuration StateItem that has the following fields:

        • hash

          Value of the hash sum from the status of the corresponding StateItem in the LCMMachine object. Appears when the status switches to Success.

        • state

          Actual state of the corresponding StateItem from the LCMMachine object. Possible values: Not Started, Running, Success, Failed.

  • configs

    List of configurations statuses, indicating results of application of each configuration. Every entry has the following fields:

    • moduleName

      Existing module name from the list defined in the spec:modules section of the related HostOSConfigurationModules object.

    • moduleVersion

      Existing module version defined in the spec:modules section of the related HostOSConfigurationModules object.

    • modulesReference

      Name of the HostOSConfigurationModules object that contains the related module configuration.

    • modulePlaybook

      Name of the Ansible playbook of the module. The value is taken from the related HostOSConfigurationModules object where this module is defined.

    • moduleURL

      URL to the module package in the FQDN format. The value is taken from the related HostOSConfigurationModules object where this module is defined.

    • moduleHashsum

      Hash sum of the module. The value is taken from the related HostOSConfigurationModules object where this module is defined.

    • lastDesignatedConfiguration

      Removed in Container Cloud 2.27.0 (Cluster releases 17.2.0 and 16.2.0). Key-value pairs representing the latest designated configuration data for modules. Each key corresponds to a machine name, while the associated value contains the configuration data encoded in the gzip+base64 format.

    • lastValidatedSpec

      Removed in Container Cloud 2.27.0 (Cluster releases 17.2.0 and 16.2.0). Last validated module configuration encoded in the gzip+base64 format.

    • valuesValid

      Removed in Container Cloud 2.27.0 (Cluster releases 17.2.0 and 16.2.0). Validation state of the configuration and secret values defined in the object spec against the module valuesValidationSchema. Always true when valuesValidationSchema is empty.

    • error

      Details of an error, if any, that occurs during the object processing by host-os-modules-controller.

    • secretObjectVersion

      Available since Container Cloud 2.27.0 (Cluster releases 17.2.0 and 16.2.0). Resource version of the corresponding Secret object observed by host-os-modules-controller. Is present only if secretValues is set.

    • moduleDeprecatedBy

      Available since Container Cloud 2.28.0 (Cluster releases 17.3.0 and 16.3.0). List of modules that deprecate the currently configured module. Contains the name and version fields specifying one or more modules that deprecate the current module.

    • supportedDistributions

      Available since Container Cloud 2.28.0 (Cluster releases 17.3.0 and 16.3.0). List of operating system distributions that are supported by the current module. An empty list means support of any distribution by the current module.

HostOSConfiguration status example:

status:
  configs:
  - moduleHashsum: bc5fafd15666cb73379d2e63571a0de96fff96ac28e5bce603498cc1f34de299
    moduleName: module-name
    modulePlaybook: main.yaml
    moduleURL: <url-to-module-archive.tgz>
    moduleVersion: 1.1.0
    modulesReference: mcc-modules
    moduleDeprecatedBy:
    - name: another-module-name
      version: 1.0.0
  - moduleHashsum: 53ec71760dd6c00c6ca668f961b94d4c162eef520a1f6cb7346a3289ac5d24cd
    moduleName: another-module-name
    modulePlaybook: main.yaml
    moduleURL: <url-to-another-module-archive.tgz>
    moduleVersion: 1.1.0
    modulesReference: mcc-modules
    secretObjectVersion: "14234794"
  containsDeprecatedModules: true
  isValid: true
  machinesStates:
    default/master-0:
      configStateItemsStatuses:
        # moduleName-moduleVersion
        module-name-1.1.0:
          # corresponding state item
          host-os-download-<object-name>-module-name-1.1.0-reconfigure:
            hash: 0e5c4a849153d3278846a8ed681f4822fb721f6d005021c4509e7126164f428d
            state: Success
          host-os-<object-name>-module-name-1.1.0-reconfigure:
            state: Not Started
        another-module-name-1.1.0:
          host-os-download-<object-name>-another-module-name-1.1.0-reconfigure:
            state: Not Started
          host-os-<object-name>-another-module-name-1.1.0-reconfigure:
            state: Not Started
      observedGeneration: 1
      selected: true
  updatedAt: "2024-04-23T14:10:28Z"
HostOSConfigurationModules

TechPreview since 2.26.0 (17.1.0 and 16.1.0)

Warning

For security reasons and to ensure safe and reliable cluster operability, test this configuration on a staging environment before applying it to production. For any questions, contact Mirantis support.

This section describes the HostOSConfigurationModules custom resource (CR) used in the Container Cloud API. It contains all necessary information to introduce and load modules for further configuration of the host operating system of the related Machine object. For description of module format, schemas, and rules, see Format and structure of a module package.

Note

This object must be created and managed on the management cluster.

For demonstration purposes, we split the Container Cloud HostOSConfigurationModules CR into the following sections:

HostOSConfigurationModules metadata
metadata

The Container Cloud HostOSConfigurationModules custom resource (CR) contains the following fields:

  • apiVersion

    Object API version that is kaas.mirantis.com/v1alpha1.

  • kind

    Object type that is HostOSConfigurationModules.

The metadata object field of the HostOSConfigurationModules resource contains the following fields:

  • name

    Object name.

Configuration example:

apiVersion: kaas.mirantis.com/v1alpha1
kind: HostOSConfigurationModules
metadata:
  name: host-os-configuration-modules-sample
HostOSConfigurationModules configuration

The spec object field contains configuration for a HostOSConfigurationModules object and has the following fields:

  • modules

    List of available modules to use as a configuration. Each entry has the following fields:

    • name

      Required. Module name that must equal the corresponding custom module name defined in the metadata section of the corresponding module. For reference, see MOSK documentation: Day-2 operations - Metadata file format.

    • url

      Required for custom modules. URL to the archive containing the module package in the FQDN format. If omitted, the module is considered as the one provided and validated by Container Cloud.

    • version

      Required. Module version in SemVer format that must equal the corresponding custom module version defined in the metadata section of the corresponding module. For reference, see MOSK documentation: Day-2 operations - Metadata file format.

    • sha256sum

      Required. Hash sum computed using the SHA-256 algorithm. The hash sum is automatically validated upon fetching the module package, the module does not load if the hash sum is invalid.

    • deprecates Since 2.28.0 (17.3.0 and 16.3.0)

      Reserved. List of modules that will be deprecated by the module. This field is overridden by the same field, if any, of the module metadata section.

      Contains the name and version fields specifying one or more modules to be deprecated. If name is omitted, it inherits the name of the current module.

Configuration example:

spec:
    modules:
    - name: mirantis-provided-module-name
      sha256sum: ff3c426d5a2663b544acea74e583d91cc2e292913fc8ac464c7d52a3182ec146
      version: 1.0.0
    - name: custom-module-name
      url: https://fully.qualified.domain.name/to/module/archive/module-name-1.0.0.tgz
      sha256sum: 258ccafac1570de7b7829bde108fa9ee71b469358dbbdd0215a081f8acbb63ba
      version: 1.0.0
HostOSConfigurationModules status

The status field of the HostOSConfigurationModules object contains the current state of the object:

  • modules

    List of module statuses, indicating the loading results of each module. Each entry has the following fields:

    • name

      Name of the loaded module.

    • version

      Version of the loaded module.

    • url

      URL to the archive containing the loaded module package in the FQDN format.

    • docURL

      URL to the loaded module documentation if it was initially present in the module package.

    • description

      Description of the loaded module if it was initially present in the module package.

    • sha256sum

      Actual SHA-256 hash sum of the loaded module.

    • valuesValidationSchema

      JSON schema used against the module configuration values if it was initially present in the module package. The value is encoded in the gzip+base64 format.

    • state

      Actual availability state of the module. Possible values are: available or error.

    • error

      Error, if any, that occurred during the module fetching and verification.

    • playbookName

      Name of the module package playbook.

    • deprecates Since 2.28.0 (17.3.0 and 16.3.0)

      List of modules that are deprecated by the module. Contains the name and version fields specifying one or more modules deprecated by the current module.

    • deprecatedBy Since 2.28.0 (17.3.0 and 16.3.0)

      List of modules that deprecate the current module. Contains the name and version fields specifying one or more modules that deprecate the current module.

    • supportedDistributions Since 2.28.0 (17.3.0 and 16.3.0)

      List of operating system distributions that are supported by the current module. An empty list means support of any distribution by the current module.

HostOSConfigurationModules status example:

status:
  modules:
  - description: Brief description of the module
    docURL: https://docs.mirantis.com
    name: mirantis-provided-module-name
    playbookName: directory/main.yaml
    sha256sum: ff3c426d5a2663b544acea74e583d91cc2e292913fc8ac464c7d52a3182ec146
    state: available
    url: https://example.mirantis.com/path/to/module-name-1.0.0.tgz
    valuesValidationSchema: <gzip+base64 encoded data>
    version: 1.0.0
    deprecates:
    - name: custom-module-name
      version: 1.0.0
  - description: Brief description of the module
    docURL: https://example.documentation.page/module-name
    name: custom-module-name
    playbookName: directory/main.yaml
    sha256sum: 258ccafac1570de7b7829bde108fa9ee71b469358dbbdd0215a081f8acbb63ba
    state: available
    url: https://fully.qualified.domain.name/to/module/archive/module-name-1.0.0.tgz
    version: 1.0.0
    deprecatedBy:
    - name: mirantis-provided-module-name
      version: 1.0.0
    supportedDistributions:
    - ubuntu/jammy
IPaddr

This section describes the IPaddr resource used in Mirantis Container Cloud API. The IPAddr object describes an IP address and contains all information about the associated MAC address.

For demonstration purposes, the Container Cloud IPaddr custom resource (CR) is split into the following major sections:

IPaddr metadata

The Container Cloud IPaddr CR contains the following fields:

  • apiVersion

    API version of the object that is ipam.mirantis.com/v1alpha1

  • kind

    Object type that is IPaddr

  • metadata

    The metadata field contains the following subfields:

    • name

      Name of the IPaddr object in the auto-XX-XX-XX-XX-XX-XX format where XX-XX-XX-XX-XX-XX is the associated MAC address

    • namespace

      Project in which the IPaddr object was created

    • labels

      Key-value pairs that are attached to the object:

      • ipam/IP

        IPv4 address

      • ipam/IpamHostID

        Unique ID of the associated IpamHost object

      • ipam/MAC

        MAC address

      • ipam/SubnetID

        Unique ID of the Subnet object

      • ipam/UID

        Unique ID of the IPAddr object

      Warning

      Labels and annotations that are not documented in this API Reference are generated automatically by Container Cloud. Do not modify them using the Container Cloud API.

Configuration example:

apiVersion: ipam.mirantis.com/v1alpha1
kind: IPaddr
metadata:
  name: auto-0c-c4-7a-a8-b8-18
  namespace: default
  labels:
    ipam/IP: 172.16.48.201
    ipam/IpamHostID: 848b59cf-f804-11ea-88c8-0242c0a85b02
    ipam/MAC: 0C-C4-7A-A8-B8-18
    ipam/SubnetID: 572b38de-f803-11ea-88c8-0242c0a85b02
    ipam/UID: 84925cac-f804-11ea-88c8-0242c0a85b02
IPAddr spec

The spec object field of the IPAddr resource contains the associated MAC address and the reference to the Subnet object:

  • mac

    MAC address in the XX:XX:XX:XX:XX:XX format

  • subnetRef

    Reference to the Subnet resource in the <subnetProjectName>/<subnetName> format

Configuration example:

spec:
  mac: 0C:C4:7A:A8:B8:18
  subnetRef: default/kaas-mgmt
IPAddr status

The status object field of the IPAddr resource reflects the actual state of the IPAddr object. It contains the following fields:

  • address

    IP address.

  • cidr

    IPv4 CIDR for the Subnet.

  • gateway

    Gateway address for the Subnet.

  • mac

    MAC address in the XX:XX:XX:XX:XX:XX format.

  • nameservers

    List of the IP addresses of name servers of the Subnet. Each element of the list is a single address, for example, 172.18.176.6.

  • state Since 2.23.0

    Message that reflects the current status of the resource. The list of possible values includes the following:

    • OK - object is operational.

    • ERR - object is non-operational. This status has a detailed description in the messages list.

    • TERM - object was deleted and is terminating.

  • messages Since 2.23.0

    List of error or warning messages if the object state is ERR.

  • objCreated

    Date, time, and IPAM version of the resource creation.

  • objStatusUpdated

    Date, time, and IPAM version of the last update of the status field in the resource.

  • objUpdated

    Date, time, and IPAM version of the last resource update.

  • phase

    Deprecated since Container Cloud 2.23.0 and will be removed in one of the following releases in favor of state. Possible values: Active, Failed, or Terminating.

Configuration example:

status:
  address: 172.16.48.201
  cidr: 172.16.48.201/24
  gateway: 172.16.48.1
  objCreated: 2021-10-21T19:09:32Z  by  v5.1.0-20210930-121522-f5b2af8
  objStatusUpdated: 2021-10-21T19:14:18.748114886Z  by  v5.1.0-20210930-121522-f5b2af8
  objUpdated: 2021-10-21T19:09:32.606968024Z  by  v5.1.0-20210930-121522-f5b2af8
  mac: 0C:C4:7A:A8:B8:18
  nameservers:
  - 172.18.176.6
  state: OK
  phase: Active
IpamHost

This section describes the IpamHost resource used in Mirantis Container Cloud API. The kaas-ipam controller monitors the current state of a bare metal Machine and verifies that the corresponding BareMetalHost object is successfully created and its inspection is completed. Then the kaas-ipam controller fetches the information about the network interface configuration, creates the IpamHost object, and requests the IP addresses.

The IpamHost object is created for each Machine and contains the entire configuration of the host network interfaces and IP addresses. It also contains the information about the associated BareMetalHost and Machine objects and MAC addresses.

Note

Before update of the management cluster to Container Cloud 2.29.0 (Cluster release 16.4.0), instead of BareMetalHostInventory, use the BareMetalHost object. For details, see BareMetalHost.

Caution

While the Cluster release of the management cluster is 16.4.0, BareMetalHostInventory operations are allowed to m:kaas@management-admin only. Once the management cluster is updated to the Cluster release 16.4.1 (or later), this limitation will be lifted.

For demonstration purposes, the Container Cloud IpamHost custom resource (CR) is split into the following major sections:

IpamHost metadata

The Container Cloud IpamHost CR contains the following fields:

  • apiVersion

    API version of the object that is ipam.mirantis.com/v1alpha1

  • kind

    Object type that is IpamHost

  • metadata

    The metadata field contains the following subfields:

    • name

      Name of the IpamHost object

    • namespace

      Project in which the IpamHost object has been created

    • labels

      Key-value pairs that are attached to the object:

      • cluster.sigs.k8s.io/cluster-name

        References the Cluster object name that IpamHost is assigned to

      • ipam/BMHostID

        Unique ID of the associated BareMetalHost object

      • ipam/MAC-XX-XX-XX-XX-XX-XX: "1"

        Number of NICs of the host that the corresponding MAC address is assigned to

      • ipam/MachineID

        Unique ID of the associated Machine object

      • ipam/UID

        Unique ID of the IpamHost object

      Warning

      Labels and annotations that are not documented in this API Reference are generated automatically by Container Cloud. Do not modify them using the Container Cloud API.

Configuration example:

apiVersion: ipam.mirantis.com/v1alpha1
kind: IpamHost
metadata:
  name: master-0
  namespace: default
  labels:
    cluster.sigs.k8s.io/cluster-name: kaas-mgmt
    ipam/BMHostID: 57250885-f803-11ea-88c8-0242c0a85b02
    ipam/MAC-0C-C4-7A-1E-A9-5C: "1"
    ipam/MAC-0C-C4-7A-1E-A9-5D: "1"
    ipam/MachineID: 573386ab-f803-11ea-88c8-0242c0a85b02
    ipam/UID: 834a2fc0-f804-11ea-88c8-0242c0a85b02
IpamHost configuration

The spec field of the IpamHost resource describes the desired state of the object. It contains the following fields:

  • nicMACmap

    Represents an unordered list of all NICs of the host obtained during the bare metal host inspection. Each NIC entry contains such fields as name, mac, ip, and so on. The primary field defines which NIC was used for PXE booting. Only one NIC can be primary. The IP address is not configurable and is provided only for debug purposes.

  • l2TemplateSelector

    If specified, contains the name (first priority) or label of the L2 template that will be applied during a machine creation. The l2TemplateSelector field is copied from the Machine providerSpec object to the IpamHost object only once, during a machine creation. To modify l2TemplateSelector after creation of a Machine CR, edit the IpamHost object.

  • netconfigUpdateMode TechPreview

    Update mode of network configuration. Possible values:

    • MANUAL

      Default, recommended. An operator manually applies new network configuration.

    • AUTO-UNSAFE

      Unsafe, not recommended. If new network configuration is rendered by kaas-ipam successfully, it is applied automatically with no manual approval.

    • MANUAL-GRACEPERIOD

      Initial value set during the IpamHost object creation. If new network configuration is rendered by kaas-ipam successfully, it is applied automatically with no manual approval. This value is implemented for automatic changes in the IpamHost object during the host provisioning and deployment. The value is changed automatically to MANUAL three hours after the IpamHost object creation.

    Caution

    For MKE clusters that are part of MOSK infrastructure, the feature support will become available in one of the following Container Cloud releases.

  • netconfigUpdateAllow TechPreview

    Manual approval of network changes. Possible values: true or false. Set to true to approve the Netplan configuration file candidate (stored in netconfigCandidate) and copy its contents to the effective Netplan configuration file list (stored in netconfigFiles). After that, its value is automatically switched back to false.

    Note

    This value has effect only if netconfigUpdateMode is set to MANUAL.

    Set to true only if status.netconfigCandidateState of network configuration candidate is OK.

    Caution

    The following fields of the ipamHost status are renamed since Container Cloud 2.22.0 in the scope of the L2Template and IpamHost objects refactoring:

    • netconfigV2 to netconfigCandidate

    • netconfigV2state to netconfigCandidateState

    • netconfigFilesState to netconfigFilesStates (per file)

    No user actions are required after renaming.

    The format of netconfigFilesState changed after renaming. The netconfigFilesStates field contains a dictionary of statuses of network configuration files stored in netconfigFiles. The dictionary keys are file paths, and the values have the same meaning per file as the former netconfigFilesState field had:

    • For a successfully rendered configuration file: OK: <timestamp> <sha256-hash-of-rendered-file>, where a timestamp is in the RFC 3339 format.

    • For a failed rendering: ERR: <error-message>.

    Caution

    For MKE clusters that are part of MOSK infrastructure, the feature support will become available in one of the following Container Cloud releases.

Configuration example:

spec:
  nicMACmap:
  - mac: 0c:c4:7a:1e:a9:5c
    name: ens11f0
  - ip: 172.16.48.157
    mac: 0c:c4:7a:1e:a9:5d
    name: ens11f1
    primary: true
  l2TemplateSelector:
    label: xxx
  netconfigUpdateMode: MANUAL
  netconfigUpdateAllow: false
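
The approval flow for netconfigUpdateAllow described above can be illustrated with the following spec extract. This is a minimal sketch that assumes netconfigUpdateMode is set to MANUAL and that the rendered candidate is valid, that is, status.netconfigCandidateState reports OK:

spec:
  netconfigUpdateMode: MANUAL
  # Set to true to approve the candidate stored in status.netconfigCandidate.
  # The value is switched back to false automatically after the approval.
  netconfigUpdateAllow: true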
IpamHost status

Caution

The following fields of the ipamHost status are renamed since Container Cloud 2.22.0 in the scope of the L2Template and IpamHost objects refactoring:

  • netconfigV2 to netconfigCandidate

  • netconfigV2state to netconfigCandidateState

  • netconfigFilesState to netconfigFilesStates (per file)

No user actions are required after renaming.

The format of netconfigFilesState changed after renaming. The netconfigFilesStates field contains a dictionary of statuses of network configuration files stored in netconfigFiles. The dictionary keys are file paths, and the values have the same meaning per file as the former netconfigFilesState field had:

  • For a successfully rendered configuration file: OK: <timestamp> <sha256-hash-of-rendered-file>, where a timestamp is in the RFC 3339 format.

  • For a failed rendering: ERR: <error-message>.

The status field of the IpamHost resource describes the observed state of the object. It contains the following fields:

  • netconfigCandidate

    Candidate of the Netplan configuration file in human-readable format that is rendered using the corresponding L2Template. This field contains valid data if l2RenderResult and netconfigCandidateState report the OK result.

  • l2RenderResult Deprecated

    Status of a rendered Netplan configuration candidate stored in netconfigCandidate. Possible values:

    • For a successful L2 template rendering: OK: timestamp sha256-hash-of-rendered-netplan, where timestamp is in the RFC 3339 format

    • For a failed rendering: ERR: <error-message>

    This field is deprecated and will be removed in one of the following releases. Use netconfigCandidateState instead.

  • netconfigCandidateState TechPreview

    Status of a rendered Netplan configuration candidate stored in netconfigCandidate. Possible values:

    • For a successful L2 template rendering: OK: timestamp sha256-hash-of-rendered-netplan, where timestamp is in the RFC 3339 format

    • For a failed rendering: ERR: <error-message>

    Caution

    For MKE clusters that are part of MOSK infrastructure, the feature support will become available in one of the following Container Cloud releases.

  • netconfigFiles

    List of Netplan configuration files rendered using the corresponding L2Template. It is used to configure host networking during bare metal host provisioning and during Kubernetes node deployment. For details, refer to MOSK documentation: Workflow of the netplan configuration using an L2 template.

    Its contents are changed only if rendering of Netplan configuration was successful. So, it always retains the last successfully rendered Netplan configuration. To apply changes in contents, the Infrastructure Operator approval is required. For details, see Modify network configuration on an existing machine.

    Every item in this list contains:

    • content

      The base64-encoded Netplan configuration file that was rendered using the corresponding L2Template.

    • path

      The file path for the Netplan configuration file on the target host.

  • netconfigFilesStates

    Status of Netplan configuration files stored in netconfigFiles. Possible values are:

    • For a successful L2 template rendering: OK: timestamp sha256-hash-of-rendered-netplan, where timestamp is in the RFC 3339 format

    • For a failed rendering: ERR: <error-message>

  • serviceMap

    Dictionary of services and their endpoints (IP address and optional interface name) that have the ipam/SVC-<serviceName> label. These addresses are added to the ServiceMap dictionary during rendering of an L2 template for a given IpamHost. For details, see Service labels and their life cycle.

  • state Since 2.23.0

    Message that reflects the current status of the resource. The list of possible values includes the following:

    • OK - object is operational.

    • ERR - object is non-operational. This status has a detailed description in the messages list.

    • TERM - object was deleted and is terminating.

  • messages Since 2.23.0

    List of error or warning messages if the object state is ERR.

  • objCreated

    Date, time, and IPAM version of the resource creation.

  • objStatusUpdated

    Date, time, and IPAM version of the last update of the status field in the resource.

  • objUpdated

    Date, time, and IPAM version of the last resource update.

Configuration example:

status:
  l2RenderResult: OK
  l2TemplateRef: namespace_name/l2-template-name/1/2589/88865f94-04f0-4226-886b-2640af95a8ab
  netconfigFiles:
    - content: ...<base64-encoded Netplan configuration file>...
      path: /etc/netplan/60-kaas-lcm-netplan.yaml
  netconfigFilesStates:
    /etc/netplan/60-kaas-lcm-netplan.yaml: 'OK: 2023-01-23T09:27:22.71802Z ece7b73808999b540e32ca1720c6b7a6e54c544cc82fa40d7f6b2beadeca0f53'
  netconfigCandidate:
    ...
    <Netplan configuration file in plain text, rendered from L2Template>
    ...
  netconfigCandidateState: 'OK: 2022-06-08T03:18:08.49590Z a4a128bc6069638a37e604f05a5f8345cf6b40e62bce8a96350b5a29bc8bccde'
  serviceMap:
    ipam/SVC-ceph-cluster:
      - ifName: ceph-br2
        ipAddress: 10.0.10.11
      - ifName: ceph-br1
        ipAddress: 10.0.12.22
    ipam/SVC-ceph-public:
      - ifName: ceph-public
        ipAddress: 10.1.1.15
    ipam/SVC-k8s-lcm:
      - ifName: k8s-lcm
        ipAddress: 10.0.1.52
  phase: Active
  state: OK
  objCreated: 2021-10-21T19:09:32Z  by  v5.1.0-20210930-121522-f5b2af8
  objStatusUpdated: 2021-10-21T19:14:18.748114886Z  by  v5.1.0-20210930-121522-f5b2af8
  objUpdated: 2021-10-21T19:09:32.606968024Z  by  v5.1.0-20210930-121522-f5b2af8
L2Template

This section describes the L2Template resource used in Mirantis Container Cloud API.

By default, Container Cloud configures a single interface on cluster nodes, leaving all other physical interfaces intact. With L2Template, you can create advanced host networking configurations for your clusters. For example, you can create bond interfaces on top of physical interfaces on the host.

For demonstration purposes, the Container Cloud L2Template custom resource (CR) is split into the following major sections:

L2Template metadata

The Container Cloud L2Template CR contains the following fields:

  • apiVersion

    API version of the object that is ipam.mirantis.com/v1alpha1.

  • kind

    Object type that is L2Template.

  • metadata

    The metadata field contains the following subfields:

    • name

      Name of the L2Template object.

    • namespace

      Project in which the L2Template object was created.

    • labels

      Key-value pairs that are attached to the object:

      Caution

      All ipam/* labels, except ipam/DefaultForCluster, are set automatically and must not be configured manually.

      • cluster.sigs.k8s.io/cluster-name

        Mandatory for newly created L2Template since Container Cloud 2.25.0 (Cluster releases 17.0.0 and 16.0.0). References the Cluster object name that this template is applied to.

        The process of selecting the L2Template object for a specific cluster is as follows:

        1. The kaas-ipam controller monitors the L2Template objects with the cluster.sigs.k8s.io/cluster-name: <clusterName> label.

        2. The L2Template object with the cluster.sigs.k8s.io/cluster-name: <clusterName> label is assigned to a cluster with Name: <clusterName>, if available.

      • ipam/PreInstalledL2Template: "1"

        Is automatically added during a management cluster deployment. Indicates that the current L2Template object was preinstalled. Represents L2 templates that are automatically copied to a project once it is created. Once the L2 templates are copied, the ipam/PreInstalledL2Template label is removed.

        Note

        Preinstalled L2 templates are removed in Container Cloud 2.26.0 (Cluster releases 17.1.0 and 16.1.0) along with the ipam/PreInstalledL2Template label. During cluster update to the mentioned releases, existing preinstalled templates are automatically removed.

      • ipam/DefaultForCluster

        This label is unique per cluster. When you use several L2 templates per cluster, only the first template is automatically labeled as the default one. All subsequent templates must be referenced in the machines configuration files using l2TemplateSelector. You can manually configure this label if required.

      • ipam/UID

        Unique ID of an object.

      • kaas.mirantis.com/provider

        Provider type.

      • kaas.mirantis.com/region

        Region name.

        Note

        The kaas.mirantis.com/region label is removed from all Container Cloud objects in 2.26.0 (Cluster releases 17.1.0 and 16.1.0). Therefore, do not add the label starting from these releases. On existing clusters updated to these releases, or if added manually, this label is ignored by Container Cloud.

      Warning

      Labels and annotations that are not documented in this API Reference are generated automatically by Container Cloud. Do not modify them using the Container Cloud API.

Configuration example:

apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  name: l2template-test
  namespace: default
  labels:
    ipam/DefaultForCluster: "1"
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
L2Template configuration

L2 template requirements

  • An L2 template must have the same project (Kubernetes namespace) as the referenced cluster.

  • A cluster can be associated with many L2 templates. Only one of them can have the ipam/DefaultForCluster label. Every L2 template that does not have the ipam/DefaultForCluster label can be later assigned to a particular machine using l2TemplateSelector.

  • The following rules apply to the default L2 template of a namespace:

    • Since Container Cloud 2.25.0 (Cluster releases 17.0.0 and 16.0.0), creation of the default L2 template for a namespace is disabled. On existing clusters, the Spec.clusterRef: default parameter of such an L2 template is automatically removed during the migration process. Subsequently, this parameter is not substituted with the cluster.sigs.k8s.io/cluster-name label, ensuring the application of the L2 template across the entire Kubernetes namespace. Therefore, you can continue using existing default namespaced L2 templates.

    • Before Container Cloud 2.25.0 (Cluster releases 15.x, 14.x, or earlier), the default L2Template object of a namespace must have the Spec.clusterRef: default parameter that is deprecated since 2.25.0.

The spec field of the L2Template resource describes the desired state of the object. It contains the following fields:

  • ifMapping

    List of interface names for the template. The interface mapping is defined globally for all Machine objects linked to the template but can be overridden at the host level, if required, by editing the IpamHost object for a particular host. The ifMapping parameter is mutually exclusive with autoIfMappingPrio.

  • autoIfMappingPrio

    List of prefixes, such as eno, ens, and so on, to match the interfaces to automatically create a list for the template. If you are not aware of any specific ordering of interfaces on the nodes, use the default ordering from the Predictable Network Interface Names specification for systemd.

    You can also override the default NIC list per host using the IfMappingOverride parameter of the corresponding IpamHost. The provision value corresponds to the network interface that was used to provision a node. Usually, it is the first NIC found on a particular node. It is defined explicitly to ensure that this interface will not be reconfigured accidentally.

    The autoIfMappingPrio parameter is mutually exclusive with ifMapping.

  • l3Layout

    Subnets to be used in the npTemplate section. The field contains a list of subnet definitions with parameters used by template macros.

    • subnetName

      Defines the alias name of the subnet that can be used to reference this subnet from the template macros. This parameter is mandatory for every entry in the l3Layout list.

    • subnetPool Unsupported since 2.28.0 (17.3.0 and 16.3.0)

      Optional. Default: none. Defines a name of the parent SubnetPool object that will be used to create a Subnet object with a given subnetName and scope. For deprecation details, see MOSK Deprecation Notes: SubnetPool resource management.

      If a corresponding Subnet object already exists, nothing will be created and the existing object will be used. If no SubnetPool is provided, no new Subnet object will be created.

    • scope

      Logical scope of the Subnet object with a corresponding subnetName. Possible values:

      • global - the Subnet object is accessible globally, for any Container Cloud project and cluster, for example, the PXE subnet.

      • namespace - the Subnet object is accessible within the same project where the L2 template is defined.

      • cluster - Unsupported since Container Cloud 2.28.0 (Cluster releases 17.3.0 and 16.3.0). The Subnet object uses the namespace where the referenced cluster is located. A subnet is only accessible to the cluster that L2Template.metadata.labels:cluster.sigs.k8s.io/cluster-name (mandatory since MOSK 23.3) or L2Template.spec.clusterRef (deprecated in MOSK 23.3) refers to. The Subnet objects with the cluster scope will be created for every new cluster.

      Note

      Every subnet referenced in an L2 template can have either a global or namespaced scope. In the latter case, the subnet must exist in the same project where the corresponding cluster and L2 template are located.

    • labelSelector

      Contains a dictionary of labels and their respective values that will be used to find the matching Subnet object. If the labelSelector field is omitted, the Subnet object will be selected by name, specified by the subnetName parameter.

      Caution

      The labels and their values in this section must match the ones added for the corresponding Subnet object.

    Caution

    The l3Layout section is mandatory for each L2Template custom resource.

  • npTemplate

    A netplan-compatible configuration with special lookup functions that defines the networking settings for the cluster hosts, where physical NIC names and details are parameterized. This configuration will be processed using Go templates. Instead of specifying IP and MAC addresses, interface names, and other network details specific to a particular host, the template supports use of special lookup functions. These lookup functions, such as nic, mac, ip, and so on, return host-specific network information when the template is rendered for a particular host.

    Caution

    All rules and restrictions of the netplan configuration also apply to L2 templates. For details, see official netplan documentation.

    Caution

    Mirantis strongly recommends following the below conventions on network interface naming:

    • A physical NIC name set by an L2 template must not exceed 15 symbols. Otherwise, an L2 template creation fails. This limit is set by the Linux kernel.

    • Names of virtual network interfaces such as VLANs, bridges, bonds, veth, and so on must not exceed 15 symbols.

    Mirantis recommends setting interface names that do not exceed 13 symbols for both physical and virtual interfaces to avoid corner cases and issues in netplan rendering.

    The following table describes the main lookup functions for an L2 template.

    Lookup function

    Description

    {{nic N}}

    Name of a NIC number N. NIC numbers correspond to the interface mapping list. This macro can be used as a key for the elements of the ethernets map, or as the value of the name and set-name parameters of a NIC. It is also used to reference the physical NIC from definitions of virtual interfaces (vlan, bridge).

    {{mac N}}

    MAC address of a NIC number N registered during a host hardware inspection.

    {{ip "N:subnet-a"}}

    IP address and mask for a NIC number N. The address will be auto-allocated from the given subnet if the address does not exist yet.

    {{ip "br0:subnet-x"}}

    IP address and mask for a virtual interface, “br0” in this example. The address will be auto-allocated from the given subnet if the address does not exist yet.

    For virtual interface names, an IP address placeholder must contain a human-readable ID that is unique within the L2 template and must have the following format:

    {{ip "<shortUniqueHumanReadableID>:<subnetNameFromL3Layout>"}}

    The <shortUniqueHumanReadableID> is made equal to a virtual interface name throughout this document and Container Cloud bootstrap templates.

    {{cidr_from_subnet "subnet-a"}}

    IPv4 CIDR address from the given subnet.

    {{gateway_from_subnet "subnet-a"}}

    IPv4 default gateway address from the given subnet.

    {{nameservers_from_subnet "subnet-a"}}

    List of the IP addresses of name servers from the given subnet.

    {{cluster_api_lb_ip}}

    Technology Preview since Container Cloud 2.24.4 (Cluster releases 15.0.3 and 14.0.3). IP address for a cluster API load balancer.

  • clusterRef

    Caution

    Deprecated since Container Cloud 2.25.0 (Cluster releases 17.0.0 and 16.0.0) in favor of the mandatory cluster.sigs.k8s.io/cluster-name label. Will be removed in one of the following releases.

    On existing clusters, this parameter is automatically migrated to the cluster.sigs.k8s.io/cluster-name label since 2.25.0.

    If an existing cluster has clusterRef: default set, the migration process involves removing this parameter. Subsequently, it is not substituted with the cluster.sigs.k8s.io/cluster-name label, ensuring the application of the L2 template across the entire Kubernetes namespace.

    The Cluster object name that this template is applied to. The default value is used to apply the given template to all clusters within a particular project, unless an L2 template that references a specific cluster name exists. The clusterRef field has priority over the cluster.sigs.k8s.io/cluster-name label:

    • When clusterRef is set to a non-default value, the cluster.sigs.k8s.io/cluster-name label will be added or updated with that value.

    • When clusterRef is set to default, the cluster.sigs.k8s.io/cluster-name label will be absent or removed.

Configuration example:

spec:
  autoIfMappingPrio:
  - provision
  - eno
  - ens
  - enp
  l3Layout:
    - subnetName: kaas-mgmt
      scope:      global
      labelSelector:
        kaas-mgmt-subnet: ""
    - subnetName: demo-pods
      scope:      namespace
    - subnetName: demo-ext
      scope:      namespace
    - subnetName: demo-ceph-cluster
      scope:      namespace
    - subnetName: demo-ceph-replication
      scope:      namespace
  npTemplate: |
    version: 2
    ethernets:
      {{nic 1}}:
        dhcp4: false
        dhcp6: false
        addresses:
          - {{ip "1:kaas-mgmt"}}
        gateway4: {{gateway_from_subnet "kaas-mgmt"}}
        nameservers:
          addresses: {{nameservers_from_subnet "kaas-mgmt"}}
        match:
          macaddress: {{mac 1}}
        set-name: {{nic 1}}
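
As mentioned earlier, L2 templates can also define virtual interfaces such as bonds, VLANs, and bridges on top of physical NICs. The following npTemplate extract is an illustrative sketch only: the bond0, k8s-ext-vlan, and k8s-ext names, the VLAN ID, and the bonding mode are hypothetical, while the demo-ext subnet name is taken from the l3Layout example above. The IP address placeholder of the k8s-ext bridge uses the virtual interface name as its unique human-readable ID, as described in the lookup functions table, and all interface names stay within the recommended 13-symbol limit:

  npTemplate: |
    version: 2
    ethernets:
      {{nic 1}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 1}}
        set-name: {{nic 1}}
      {{nic 2}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 2}}
        set-name: {{nic 2}}
    bonds:
      bond0:
        interfaces:
          - {{nic 1}}
          - {{nic 2}}
        parameters:
          mode: 802.3ad
    vlans:
      k8s-ext-vlan:
        id: 1001
        link: bond0
    bridges:
      k8s-ext:
        interfaces: [k8s-ext-vlan]
        addresses:
          - {{ip "k8s-ext:demo-ext"}}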
L2Template status

The status field of the L2Template resource reflects the actual state of the L2Template object and contains the following fields:

  • state Since 2.23.0

    Message that reflects the current status of the resource. The list of possible values includes the following:

    • OK - object is operational.

    • ERR - object is non-operational. This status has a detailed description in the messages list.

    • TERM - object was deleted and is terminating.

  • messages Since 2.23.0

    List of error or warning messages if the object state is ERR.

  • objCreated

    Date, time, and IPAM version of the resource creation.

  • objStatusUpdated

    Date, time, and IPAM version of the last update of the status field in the resource.

  • objUpdated

    Date, time, and IPAM version of the last resource update.

  • phase

    Deprecated since Container Cloud 2.23.0 (Cluster release 11.7.0) and will be removed in one of the following releases in favor of state. Possible values: Active, Failed, or Terminating.

  • reason

    Deprecated since Container Cloud 2.23.0 (Cluster release 11.7.0) and will be removed in one of the following releases in favor of messages. For the field description, see messages.

Configuration example:

status:
  phase: Failed
  state: ERR
  messages:
    - "ERR: The kaas-mgmt subnet in the terminating state."
  objCreated: 2021-10-21T19:09:32Z  by  v5.1.0-20210930-121522-f5b2af8
  objStatusUpdated: 2021-10-21T19:14:18.748114886Z  by  v5.1.0-20210930-121522-f5b2af8
  objUpdated: 2021-10-21T19:09:32.606968024Z  by  v5.1.0-20210930-121522-f5b2af8
Machine

This section describes the Machine resource used in Mirantis Container Cloud API for bare metal provider. The Machine resource describes the machine-level parameters.

For demonstration purposes, the Container Cloud Machine custom resource (CR) is split into the following major sections:

metadata

The Container Cloud Machine CR contains the following fields:

  • apiVersion

    API version of the object that is cluster.k8s.io/v1alpha1.

  • kind

    Object type that is Machine.

The metadata object field of the Machine resource contains the following fields:

  • name

    Name of the Machine object.

  • namespace

    Project in which the Machine object is created.

  • annotations

    Key-value pair to attach arbitrary metadata to the object:

    • metal3.io/BareMetalHost

      Annotation attached to the Machine object to reference the corresponding BareMetalHostInventory object in the <BareMetalHostProjectName>/<BareMetalHostName> format.

      Note

      Before update of the management cluster to Container Cloud 2.29.0 (Cluster release 16.4.0), instead of BareMetalHostInventory, use the BareMetalHost object. For details, see BareMetalHost.

      Caution

      While the Cluster release of the management cluster is 16.4.0, BareMetalHostInventory operations are allowed to m:kaas@management-admin only. Once the management cluster is updated to the Cluster release 16.4.1 (or later), this limitation will be lifted.

  • labels

    Key-value pairs that are attached to the object:

    • kaas.mirantis.com/provider

      Provider type that matches the provider type in the Cluster object and must be baremetal.

    • kaas.mirantis.com/region

      Region name that matches the region name in the Cluster object.

      Note

      The kaas.mirantis.com/region label is removed from all Container Cloud objects in 2.26.0 (Cluster releases 17.1.0 and 16.1.0). Therefore, do not add the label starting from these releases. On existing clusters updated to these releases, or if added manually, this label is ignored by Container Cloud.

    • cluster.sigs.k8s.io/cluster-name

      Cluster name that the Machine object is linked to.

    • cluster.sigs.k8s.io/control-plane

      For the control plane role of a machine, this label contains any value, for example, "true". For the worker role, this label is absent.

    Warning

    Labels and annotations that are not documented in this API Reference are generated automatically by Container Cloud. Do not modify them using the Container Cloud API.

Configuration example:

apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: example-control-plane
  namespace: example-ns
  annotations:
    metal3.io/BareMetalHost: default/master-0
  labels:
    kaas.mirantis.com/provider: baremetal
    cluster.sigs.k8s.io/cluster-name: example-cluster
    cluster.sigs.k8s.io/control-plane: "true" # remove for worker
spec:providerSpec for instance configuration

The spec object field of the Machine object represents the BareMetalMachineProviderSpec subresource with all required details to create a bare metal instance. It contains the following fields:

  • apiVersion

    API version of the object that is baremetal.k8s.io/v1alpha1.

  • kind

    Object type that is BareMetalMachineProviderSpec.

  • bareMetalHostProfile

    Configuration profile of a bare metal host:

    • name

      Name of a bare metal host profile

    • namespace

      Project in which the bare metal host profile is created.

  • l2TemplateIfMappingOverride

    If specified, overrides the interface mapping value for the corresponding L2Template object.

  • l2TemplateSelector

    If specified, contains the name (first priority) or label of the L2 template that will be applied during a machine creation. The l2TemplateSelector field is copied from the Machine providerSpec object to the IpamHost object only once, during a machine creation. To modify l2TemplateSelector after creation of a Machine CR, edit the IpamHost object.

  • hostSelector

    Specifies the matching criteria for labels on the bare metal hosts. Limits the set of the BareMetalHostInventory objects considered for claiming for the Machine object. The following selector labels can be added when creating a machine using the Container Cloud web UI:

    • hostlabel.bm.kaas.mirantis.com/controlplane

    • hostlabel.bm.kaas.mirantis.com/worker

    • hostlabel.bm.kaas.mirantis.com/storage

    Any custom label that is assigned to one or more bare metal hosts using API can be used as a host selector. If the BareMetalHostInventory objects with the specified label are missing, the Machine object will not be deployed until at least one bare metal host with the specified label is available.

    Note

    Before update of the management cluster to Container Cloud 2.29.0 (Cluster release 16.4.0), instead of BareMetalHostInventory, use the BareMetalHost object. For details, see BareMetalHost.

    Caution

    While the Cluster release of the management cluster is 16.4.0, BareMetalHostInventory operations are allowed to m:kaas@management-admin only. Once the management cluster is updated to the Cluster release 16.4.1 (or later), this limitation will be lifted.

  • nodeLabels

    List of node labels to be attached to a node for the user to run certain components on separate cluster nodes. The list of allowed node labels is located in the Cluster object status providerStatus.releaseRef.current.allowedNodeLabels field.

    If the value field is not defined in allowedNodeLabels, a label can have any value.

    Before or after a machine deployment, add the required label from the allowed node labels list with the corresponding value to spec.providerSpec.value.nodeLabels in machine.yaml. For example:

    nodeLabels:
    - key: stacklight
      value: enabled
    

    The addition of a node label that is not available in the list of allowed node labels is restricted.

  • distribution Mandatory

    Specifies an operating system (OS) distribution ID that is present in the current ClusterRelease object under the AllowedDistributions list. When specified, the BareMetalHostInventory object linked to this Machine object will be provisioned using the selected OS distribution instead of the default one.

    By default, ubuntu/jammy is installed on greenfield managed clusters:

    • Since Container Cloud 2.28.0 (Cluster releases 17.3.0 and 16.3.0), for MOSK clusters

    • Since Container Cloud 2.27.0 (Cluster releases 17.2.0 and 16.2.0), for non-MOSK clusters

    The default distribution is marked with the boolean flag default inside one of the elements under the AllowedDistributions list.

    The ubuntu/focal distribution was deprecated in Container Cloud 2.28.0 and is supported only for existing managed clusters. The Container Cloud 2.28.x release series is the last one to support Ubuntu 20.04 as the host operating system for managed clusters.

    Caution

    The outdated ubuntu/bionic distribution, which is removed in Cluster releases 17.0.0 and 16.0.0, is only supported for existing clusters based on Ubuntu 18.04. For greenfield deployments of managed clusters, only ubuntu/jammy is supported.

    Warning

    During the course of the Container Cloud 2.28.x series, Mirantis highly recommends upgrading the operating system on all nodes of your managed cluster machines to Ubuntu 22.04 before the next major Cluster release becomes available.

    It is not mandatory to upgrade all machines at once. You can upgrade them one by one or in small batches, for example, if the maintenance window is limited in time.

    Otherwise, the Cluster release update of the Ubuntu 20.04-based managed clusters will become impossible as of Container Cloud 2.29.0 with Ubuntu 22.04 as the only supported version.

    Management cluster update to Container Cloud 2.29.1 will be blocked if at least one node of any related managed cluster is running Ubuntu 20.04.

  • maintenance

    Maintenance mode of a machine. If enabled, the node of the selected machine is drained, cordoned, and prepared for maintenance operations.

  • upgradeIndex (optional)

    Positive numeric value that determines the order of machines upgrade. The first machine to upgrade is always one of the control plane machines with the lowest upgradeIndex. Other control plane machines are upgraded one by one according to their upgrade indexes.

    If the Cluster spec dedicatedControlPlane field is false, worker machines are upgraded only after the upgrade of all control plane machines finishes. Otherwise, they are upgraded after the first control plane machine, concurrently with other control plane machines.

    If two or more machines have the same value of upgradeIndex, these machines are equally prioritized during upgrade.

  • deletionPolicy

    Generally available since Container Cloud 2.25.0 (Cluster releases 17.0.0 and 16.0.0). Technology Preview since 2.21.0 (Cluster releases 11.5.0 and 7.11.0) for non-MOSK clusters. Policy used to identify steps required during a Machine object deletion. Supported policies are as follows:

    • graceful

      Prepares a machine for deletion by cordoning, draining, and removing from Docker Swarm of the related node. Then deletes Kubernetes objects and associated resources. Can be aborted only before a node is removed from Docker Swarm.

    • unsafe

      Default. Deletes Kubernetes objects and associated resources without any preparations.

    • forced

      Deletes Kubernetes objects and associated resources without any preparations. Removes the Machine object even if the cloud provider or LCM Controller gets stuck at some step. May require a manual cleanup of machine resources in case of the controller failure.

    For more details on the workflow of machine deletion policies, see MOSK documentation: Overview of machine deletion policies.

Configuration example:

spec:
  ...
  providerSpec:
    value:
      apiVersion: baremetal.k8s.io/v1alpha1
      kind: BareMetalMachineProviderSpec
      bareMetalHostProfile:
        name: default
        namespace: default
      l2TemplateIfMappingOverride:
        - eno1
        - enp0s0
      l2TemplateSelector:
        label: l2-template1-label-1
      hostSelector:
        matchLabels:
          kaas.mirantis.com/baremetalhost-id: hw-master-0
      nodeLabels:
      - key: stacklight
        value: enabled
      distribution: ubuntu/jammy
      delete: false
      deletionPolicy: graceful
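
The optional maintenance and upgradeIndex fields described above are not present in the example. When used, they are set in the same providerSpec value section. The following extract is illustrative only, with hypothetical values:

spec:
  providerSpec:
    value:
      ...
      # Enable maintenance mode to drain and cordon the machine node
      maintenance: false
      # Explicitly set the machine position in the upgrade order
      upgradeIndex: 2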
Machine status

The status object field of the Machine object represents the BareMetalMachineProviderStatus subresource that describes the current bare metal instance state and contains the following fields:

  • apiVersion

    API version of the object that is cluster.k8s.io/v1alpha1.

  • kind

    Object type that is BareMetalMachineProviderStatus.

  • hardware

    Provides a machine hardware information:

    • cpu

      Number of CPUs.

    • ram

      RAM capacity in GB.

    • storage

      List of hard drives mounted on the machine. Contains the disk name and size in GB.

  • status

    Represents the current status of a machine:

    • Provision

      A machine is yet to obtain a status

    • Uninitialized

      A machine is yet to obtain the node IP address and host name

    • Pending

      A machine is yet to receive the deployment instructions and it is either not booted yet or waits for the LCM controller to be deployed

    • Prepare

      A machine is running the Prepare phase during which Docker images and packages are being predownloaded

    • Deploy

      A machine is processing the LCM Controller instructions

    • Reconfigure

      A machine is being updated with a configuration without affecting workloads running on the machine

    • Ready

      A machine is deployed and the supported Mirantis Kubernetes Engine (MKE) version is set

    • Maintenance

      A machine host is cordoned, drained, and prepared for maintenance operations

  • currentDistribution Since 2.24.0 as TechPreview and 2.24.2 as GA

    Distribution ID of the current operating system installed on the machine. For example, ubuntu/jammy.

  • maintenance

    Maintenance mode of a machine. If enabled, the node of the selected machine is drained, cordoned, and prepared for maintenance operations.

  • reboot Available since 2.22.0

    Indicator of a host reboot to complete the Ubuntu operating system updates, if any.

    • required

      Specifies whether a host reboot is required. Boolean. If true, a manual host reboot is required.

    • reason

      Specifies the package name(s) to apply during a host reboot.

  • upgradeIndex

    Positive numeric value that determines the order of machines upgrade. If upgradeIndex in the Machine object spec is set, this status value equals the one in the spec. Otherwise, this value displays the automatically generated order of upgrade.

  • delete

    Generally available since Container Cloud 2.25.0 (Cluster releases 17.0.0 and 16.0.0). Technology Preview since 2.21.0 for non-MOSK clusters. Start of a machine deletion or a successful abortion. Boolean.

  • prepareDeletionPhase

    Generally available since Container Cloud 2.25.0 (Cluster releases 17.0.0 and 16.0.0). Technology Preview since 2.21.0 for non-MOSK clusters. Preparation phase for a graceful machine deletion. Possible values are as follows:

    • started

      Cloud provider controller prepares a machine for deletion by cordoning, draining the machine, and so on.

    • completed

      LCM Controller starts removing the machine resources since the preparation for deletion is complete.

    • aborting

      Cloud provider controller attempts to uncordon the node. If the attempt fails, the status changes to failed.

    • failed

      Error in the deletion workflow.

    For the workflow description of a graceful deletion, see MOSK documentation: Overview of machine deletion policies.

Configuration example:

status:
  providerStatus:
    apiVersion: baremetal.k8s.io/v1alpha1
    kind: BareMetalMachineProviderStatus
    hardware:
      cpu: 11
      ram: 16
      storage:
        - name: /dev/vda
          size: 61
        - name: /dev/vdb
          size: 32
        - name: /dev/vdc
          size: 32
    reboot:
      required: true
      reason: |
        linux-image-5.13.0-51-generic
        linux-base
    status: Ready
    upgradeIndex: 1
MetalLBConfig

TechPreview since 2.21.0 and 2.21.1 for MOSK 22.5. GA since 2.24.0 for management and regional clusters. GA since 2.25.0 for managed clusters.

This section describes the MetalLBConfig custom resource used in the Container Cloud API that contains the MetalLB configuration objects for a particular cluster.

For demonstration purposes, the Container Cloud MetalLBConfig custom resource description is split into the following major sections:

The Container Cloud API also uses the third-party open source MetalLB API. For details, see MetalLB objects.

MetalLBConfig metadata

The Container Cloud MetalLBConfig CR contains the following fields:

  • apiVersion

    API version of the object that is kaas.mirantis.com/v1alpha1.

  • kind

    Object type that is MetalLBConfig.

The metadata object field of the MetalLBConfig resource contains the following fields:

  • name

    Name of the MetalLBConfig object.

  • namespace

    Project in which the object was created. Must match the project name of the target cluster.

  • labels

    Key-value pairs attached to the object. Mandatory labels:

    • kaas.mirantis.com/provider

      Provider type that is baremetal.

    • kaas.mirantis.com/region

      Region name that matches the region name of the target cluster.

      Note

      The kaas.mirantis.com/region label is removed from all Container Cloud objects in 2.26.0 (Cluster releases 17.1.0 and 16.1.0). Therefore, do not add the label starting from these releases. On existing clusters updated to these releases, or if added manually, this label is ignored by Container Cloud.

    • cluster.sigs.k8s.io/cluster-name

      Name of the cluster that the MetalLB configuration must apply to.

    Warning

    Labels and annotations that are not documented in this API Reference are generated automatically by Container Cloud. Do not modify them using the Container Cloud API.

Configuration example:

apiVersion: kaas.mirantis.com/v1alpha1
kind: MetalLBConfig
metadata:
  name: metallb-demo
  namespace: test-ns
  labels:
    kaas.mirantis.com/provider: baremetal
    cluster.sigs.k8s.io/cluster-name: test-cluster
MetalLBConfig spec

The spec field of the MetalLBConfig object represents the MetalLBConfigSpec subresource that contains the description of MetalLB configuration objects. These objects are created in the target cluster during its deployment.

The spec field contains the following optional fields:

  • addressPools

    Removed in Container Cloud 2.27.0 (Cluster releases 17.2.0 and 16.2.0), deprecated in 2.26.0 (Cluster releases 17.1.0 and 16.1.0).

    List of MetalLBAddressPool objects to create MetalLB AddressPool objects.

  • bfdProfiles

    List of MetalLBBFDProfile objects to create MetalLB BFDProfile objects.

  • bgpAdvertisements

    List of MetalLBBGPAdvertisement objects to create MetalLB BGPAdvertisement objects.

  • bgpPeers

    List of MetalLBBGPPeer objects to create MetalLB BGPPeer objects.

  • communities

    List of MetalLBCommunity objects to create MetalLB Community objects.

  • ipAddressPools

    List of MetalLBIPAddressPool objects to create MetalLB IPAddressPool objects.

  • l2Advertisements

    List of MetalLBL2Advertisement objects to create MetalLB L2Advertisement objects.

    The l2Advertisements object allows defining interfaces to optimize the announcement. When you use the interfaces selector, LB addresses are announced only on selected host interfaces.

    Mirantis recommends using the interfaces selector if nodes use separate host networks for different types of traffic. The pros of such configuration are as follows: less spam on other interfaces and networks and limited chances to reach IP addresses of load-balanced services from irrelevant interfaces and networks.

    Caution

    Interface names in the interfaces list must match those on the corresponding nodes.

  • templateName

    Unsupported since 2.28.0 (17.3.0 and 16.3.0). Available since 2.24.0 (14.0.0). For details, see MOSK Deprecation Notes: MetalLBConfigTemplate resource management.

    Name of the MetalLBConfigTemplate object used as a source of MetalLB configuration objects. Mutually exclusive with the fields listed below that will be part of the MetalLBConfigTemplate object. For details, see MetalLBConfigTemplate.

    Before Cluster releases 17.2.0 and 16.2.0, MetalLBConfigTemplate is the default configuration method for MetalLB on bare metal deployments. Since Cluster releases 17.2.0 and 16.2.0, use the MetalLBConfig object instead.

    Caution

    For MKE clusters that are part of MOSK infrastructure, the feature support will become available in one of the following Container Cloud releases.

    Caution

    For managed clusters, this field is available as Technology Preview since Container Cloud 2.24.0, is generally available since 2.25.0, and is deprecated since 2.27.0.


The objects listed in the spec field of the MetalLBConfig object, such as MetalLBIPAddressPool, MetalLBL2Advertisement, and so on, are used as templates for the MetalLB objects that will be created in the target cluster. Each of these objects has the following structure:

  • labels

    Optional. Key-value pairs attached to the metallb.io/<objectName> object as metadata.labels.

  • name

    Name of the metallb.io/<objectName> object.

  • spec

    Contents of the spec section of the metallb.io/<objectName> object. The spec field has the metallb.io/<objectName>Spec type. For details, see MetalLB objects.

For example, MetalLBIPAddressPool is a template for the metallb.io/IPAddressPool object and has the following structure:

  • labels

    Optional. Key-value pairs attached to the metallb.io/IPAddressPool object as metadata.labels.

  • name

    Name of the metallb.io/IPAddressPool object.

  • spec

    Contents of the spec section of the metallb.io/IPAddressPool object. The spec field has the metallb.io/IPAddressPoolSpec type.

MetalLB objects

Container Cloud supports the following MetalLB object types of the metallb.io API group:

  • IPAddressPool

  • Community

  • L2Advertisement

  • BFDProfile

  • BGPAdvertisement

  • BGPPeer

As of v1beta1 and v1beta2 API versions, metadata of MetalLB objects has a standard format with no specific fields or labels defined for any particular object:

  • apiVersion

    API version of the object that can be metallb.io/v1beta1 or metallb.io/v1beta2.

  • kind

    Object type that is one of the metallb.io types listed above. For example, IPAddressPool.

  • metadata

    Object metadata that contains the following subfields:

    • name

      Name of the object.

    • namespace

      Namespace where the MetalLB components are located. It matches metallb-system in Container Cloud.

    • labels

      Optional. Key-value pairs that are attached to the object. It can be an arbitrary set of labels. No special labels are defined as of v1beta1 and v1beta2 API versions.

The MetalLBConfig object contains spec sections of the metallb.io/<objectName> objects that have the metallb.io/<objectName>Spec type. For the metallb.io/<objectName> and metallb.io/<objectName>Spec type definitions, refer to the official MetalLB documentation.

Note

Before Container Cloud 2.27.0 (Cluster releases 17.2.0 and 16.2.0), metallb.io/<objectName> objects v0.13.9 are supported.

The l2Advertisements object allows defining interfaces to optimize the announcement. When you use the interfaces selector, LB addresses are announced only on selected host interfaces. Mirantis recommends this configuration if nodes use separate host networks for different types of traffic. The pros of such configuration are as follows: less spam on other interfaces and networks, limited chances to reach services LB addresses from irrelevant interfaces and networks.

Configuration example:

l2Advertisements: |
  - name: management-lcm
    spec:
      ipAddressPools:
        - default
      interfaces:
        # LB addresses from the "default" address pool will be announced
        # on the "k8s-lcm" interface
        - k8s-lcm

Caution

Interface names in the interfaces list must match those on the corresponding nodes.

MetalLBConfig status

Available since 2.24.0 for management clusters

Caution

For managed clusters, this field is available as Technology Preview and is generally available since Container Cloud 2.25.0.

Caution

For MKE clusters that are part of MOSK infrastructure, the feature support will become available in one of the following Container Cloud releases.

The status field describes the actual state of the object. It contains the following fields:

  • bootstrapMode Only in 2.24.0

    Field that appears only during a management cluster bootstrap as true and is used internally for bootstrap. Once deployment completes, the value is set to false and the field is excluded from the status output.

  • objects

    Description of MetalLB objects that is used to create MetalLB native objects in the target cluster.

    The format of underlying objects is the same as for those in the spec field, except templateName, which is obsolete since Container Cloud 2.28.0 (Cluster releases 17.3.0 and 16.3.0) and which is not present in this field. The objects contents are rendered from the following locations, with possible modifications for the bootstrap cluster:

    • Since Container Cloud 2.28.0 (Cluster releases 17.3.0 and 16.3.0), MetalLBConfig.spec

    • Before Container Cloud 2.28.0 (Cluster releases 17.2.0, 16.2.0, or earlier):

      • MetalLBConfigTemplate.status of the corresponding template if MetalLBConfig.spec.templateName is defined

      • MetalLBConfig.spec if MetalLBConfig.spec.templateName is not defined

  • propagateResult

    Result of objects propagation. During objects propagation, native MetalLB objects of the target cluster are created and updated according to the description of the objects present in the status.objects field.

    This field contains the following information:

    • message

      Text message that describes the result of the last attempt of objects propagation. Contains an error message if the last attempt was unsuccessful.

    • success

      Result of the last attempt of objects propagation. Boolean.

    • time

      Timestamp of the last attempt of objects propagation. For example, 2023-07-04T00:30:36Z.

    If the objects propagation was successful, the MetalLB objects of the target cluster match the ones present in the status.objects field.

  • updateResult

    Status of the MetalLB objects update. Has the same subfields format as propagateResult described above.

    During objects update, the status.objects contents are rendered as described in the objects field definition above.

    If the objects update was successful, the MetalLB objects description present in status.objects is rendered successfully and up to date. This description is used to update MetalLB objects in the target cluster. If the objects update was not successful, MetalLB objects will not be propagated to the target cluster.

MetalLB configuration examples

Example of configuration template for using L2 announcements:

apiVersion: kaas.mirantis.com/v1alpha1
kind: MetalLBConfig
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: managed-cluster
    kaas.mirantis.com/provider: baremetal
  name: managed-l2
  namespace: managed-ns
spec:
  ipAddressPools:
    - name: services
      spec:
        addresses:
          - 10.100.91.151-10.100.91.170
        autoAssign: true
        avoidBuggyIPs: false
  l2Advertisements:
    - name: services
      spec:
        ipAddressPools:
        - services

Example of configuration extract for using the interfaces selector, which enables announcement of LB addresses only on selected host interfaces:

l2Advertisements:
  - name: services
    spec:
      ipAddressPools:
      - default
      interfaces:
      - k8s-lcm

Caution

Interface names in the interfaces list must match the ones on the corresponding nodes.

After the object is created and processed by the MetalLB Controller, the status field is added. For example:

status:
  objects:
    ipAddressPools:
    - name: services
      spec:
        addresses:
        - 10.100.100.151-10.100.100.170
        autoAssign: true
        avoidBuggyIPs: false
    l2Advertisements:
      - name: services
        spec:
          ipAddressPools:
          - services
  propagateResult:
    message: Objects were successfully updated
    success: true
    time: "2023-07-04T14:31:40Z"
  updateResult:
    message: Objects were successfully read from MetalLB configuration specification
    success: true
    time: "2023-07-04T14:31:39Z"

Example of native MetalLB objects to be created in the managed-ns/managed-cluster cluster during deployment:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: services
  namespace: metallb-system
spec:
  addresses:
  - 10.100.91.151-10.100.91.170
  autoAssign: true
  avoidBuggyIPs: false
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: services
  namespace: metallb-system
spec:
  ipAddressPools:
  - services
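
To verify that these native objects exist in the target cluster, you can list them in the metallb-system namespace. This is a minimal sketch that assumes kubectl access to the managed cluster and the standard MetalLB CRD names:

kubectl -n metallb-system get ipaddresspools.metallb.io,l2advertisements.metallb.io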

Example of configuration template for using BGP announcements:

apiVersion: kaas.mirantis.com/v1alpha1
kind: MetalLBConfig
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: managed-cluster
    kaas.mirantis.com/provider: baremetal
  name: managed-bgp
  namespace: managed-ns
spec:
  bgpPeers:
    - name: bgp-peer-rack1
      spec:
        peerAddress: 10.0.41.1
        peerASN: 65013
        myASN: 65099
        nodeSelectors:
          - matchLabels:
              rack-id: rack1
    - name: bgp-peer-rack2
      spec:
        peerAddress: 10.0.42.1
        peerASN: 65023
        myASN: 65099
        nodeSelectors:
          - matchLabels:
              rack-id: rack2
    - name: bgp-peer-rack3
      spec:
        peerAddress: 10.0.43.1
        peerASN: 65033
        myASN: 65099
        nodeSelectors:
          - matchLabels:
              rack-id: rack3
  ipAddressPools:
    - name: services
      spec:
        addresses:
          - 10.100.191.151-10.100.191.170
        autoAssign: true
        avoidBuggyIPs: false
  bgpAdvertisements:
    - name: services
      spec:
        ipAddressPools:
        - services
MetalLBConfigTemplate

Unsupported since 2.28.0 (17.3.0 and 16.3.0)

Warning

The MetalLBConfigTemplate object may not work as expected due to its deprecation. For details, see MOSK Deprecation Notes: MetalLBConfigTemplate resource management.

Support status of MetalLBConfigTemplate

Container Cloud release

Cluster release

Support status

2.29.0

17.4.0 and 16.4.0

Admission Controller blocks creation of the object

2.28.0

17.3.0 and 16.3.0

Unsupported for any cluster type

2.27.0

17.2.0 and 16.2.0

Deprecated for any cluster type

2.25.0

17.0.0 and 16.0.0

Generally available for managed clusters

2.24.2

15.0.1, 14.0.1, 14.0.0

Technology Preview for managed clusters

2.24.0

14.0.0

Generally available for management clusters

This section describes the MetalLBConfigTemplate custom resource used in the Container Cloud API that contains the template for MetalLB configuration for a particular cluster.

Note

The MetalLBConfigTemplate object applies to bare metal deployments only.

Before Cluster releases 17.2.0 and 16.2.0, MetalLBConfigTemplate was the default configuration method for MetalLB on bare metal deployments. This method allowed the use of Subnet objects to define MetalLB IP address pools in the same way as they were used before introducing the MetalLBConfig and MetalLBConfigTemplate objects. Since Cluster releases 17.2.0 and 16.2.0, use the MetalLBConfig object for this purpose instead.

For demonstration purposes, the Container Cloud MetalLBConfigTemplate custom resource description is split into the following major sections:

MetalLBConfigTemplate metadata

The Container Cloud MetalLBConfigTemplate CR contains the following fields:

  • apiVersion

    API version of the object that is ipam.mirantis.com/v1alpha1.

  • kind

    Object type that is MetalLBConfigTemplate.

The metadata object field of the MetalLBConfigTemplate resource contains the following fields:

  • name

    Name of the MetalLBConfigTemplate object.

  • namespace

    Project in which the object was created. Must match the project name of the target cluster.

  • labels

    Key-value pairs attached to the object. Mandatory labels:

    • kaas.mirantis.com/provider

      Provider type that is baremetal.

    • kaas.mirantis.com/region

      Region name that matches the region name of the target cluster.

      Note

      The kaas.mirantis.com/region label is removed from all Container Cloud objects in 2.26.0 (Cluster releases 17.1.0 and 16.1.0). Therefore, do not add the label starting from these releases. On existing clusters updated to these releases, or if manually added, this label will be ignored by Container Cloud.

    • cluster.sigs.k8s.io/cluster-name

      Name of the cluster that the MetalLB configuration applies to.

    Warning

    Labels and annotations that are not documented in this API Reference are generated automatically by Container Cloud. Do not modify them using the Container Cloud API.

Configuration example:

apiVersion: ipam.mirantis.com/v1alpha1
kind: MetalLBConfigTemplate
metadata:
  name: metallb-demo
  namespace: test-ns
  labels:
    kaas.mirantis.com/provider: baremetal
    cluster.sigs.k8s.io/cluster-name: test-cluster
MetalLBConfigTemplate spec

The spec field of the MetalLBConfigTemplate object contains the templates of MetalLB configuration objects and optional auxiliary variables. Container Cloud uses these templates to create MetalLB configuration objects during the cluster deployment.

The spec field contains the following optional fields:

  • machines

    Key-value dictionary to select IpamHost objects corresponding to nodes of the target cluster. Keys contain machine aliases used in spec.templates. Values contain the NameLabelsSelector items that select IpamHost by name or by labels. For example:

    machines:
      control1:
        name: mosk-control-uefi-0
      worker1:
        labels:
          uid: kaas-node-4003a5f6-2667-40e3-aa64-ebe713a8a7ba
    

    This field is required if some IP addresses of nodes are used in spec.templates.

  • vars

    Key-value dictionary of arbitrary user-defined variables that are used in spec.templates. For example:

    vars:
      localPort: 4561
    
  • templates

    List of templates for MetalLB configuration objects that are used to render MetalLB configuration definitions and create MetalLB objects in the target cluster. Contains the following optional fields:

    • bfdProfiles

      Template for the MetalLBBFDProfile object list to create MetalLB BFDProfile objects.

    • bgpAdvertisements

      Template for the MetalLBBGPAdvertisement object list to create MetalLB BGPAdvertisement objects.

    • bgpPeers

      Template for the MetalLBBGPPeer object list to create MetalLB BGPPeer objects.

    • communities

      Template for the MetalLBCommunity object list to create MetalLB Community objects.

    • ipAddressPools

      Template for the MetalLBIPAddressPool object list to create MetalLB IPAddressPool objects.

    • l2Advertisements

      Template for the MetalLBL2Advertisement object list to create MetalLB L2Advertisement objects.

    Each template is a string and has the same structure as the list of the corresponding objects described in MetalLBConfig spec such as MetalLBIPAddressPool and MetalLBL2Advertisement, but you can use additional functions and variables inside these templates.

    Note

    When using the MetalLBConfigTemplate object, you can define MetalLB IP address pools using both Subnet objects and spec.ipAddressPools templates. IP address pools rendered from these sources will be concatenated and then written to status.renderedObjects.ipAddressPools.

    You can use the following functions in templates:

    • ipAddressPoolNames

      Selects all IP address pools of the given announcement type found for the target cluster. Possible types: layer2, bgp, any.

      The any type includes all IP address pools found for the target cluster. The announcement types of IP address pools are verified using the metallb/address-pool-protocol labels of the corresponding Subnet object.

      The ipAddressPools templates have no types as native MetalLB IPAddressPool objects have no announcement type.

      The l2Advertisements template can refer to IP address pools of the layer2 or any type.

      The bgpAdvertisements template can refer to IP address pools of the bgp or any type.

      IP address pools are searched in the templates.ipAddressPools field and in the Subnet objects of the target cluster. For example:

      l2Advertisements: |
        - name: l2services
          spec:
            ipAddressPools: {{ipAddressPoolNames "layer2"}}
      
      bgpAdvertisements: |
        - name: l3services
          spec:
            ipAddressPools: {{ipAddressPoolNames "bgp"}}
      
      l2Advertisements: |
        - name: any
          spec:
            ipAddressPools: {{ipAddressPoolNames "any"}}
      
      bgpAdvertisements: |
        - name: any
          spec:
            ipAddressPools: {{ipAddressPoolNames "any"}}
      

    The l2Advertisements object allows defining interfaces to optimize the announcement. When you use the interfaces selector, LB addresses are announced only on the selected host interfaces. Mirantis recommends this configuration if nodes use separate host networks for different types of traffic. The benefits of such a configuration are as follows: reduced announcement traffic on other interfaces and networks, and a lower chance of reaching services LB addresses from irrelevant interfaces and networks.

    Configuration example:

    l2Advertisements: |
      - name: management-lcm
        spec:
          ipAddressPools:
            - default
          interfaces:
            # LB addresses from the "default" address pool will be announced
            # on the "k8s-lcm" interface
            - k8s-lcm
    

    Caution

    Interface names in the interfaces list must match those on the corresponding nodes.

MetalLBConfigTemplate status

The status field describes the actual state of the object. It contains the following fields:

  • renderedObjects

    MetalLB objects description rendered from spec.templates in the same format as they are defined in the MetalLBConfig spec field.

    All underlying objects are optional. The following objects can be present: bfdProfiles, bgpAdvertisements, bgpPeers, communities, ipAddressPools, l2Advertisements.

  • state Since 2.23.0

    Message that reflects the current status of the resource. The list of possible values includes the following:

    • OK - object is operational.

    • ERR - object is non-operational. This status has a detailed description in the messages list.

    • TERM - object was deleted and is terminating.

  • messages Since 2.23.0

    List of error or warning messages if the object state is ERR.

  • objCreated

    Date, time, and IPAM version of the resource creation.

  • objStatusUpdated

    Date, time, and IPAM version of the last update of the status field in the resource.

  • objUpdated

    Date, time, and IPAM version of the last resource update.
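
For example, you can check the overall state and any error or warning messages of the object with a single command. This is a minimal sketch: the object name and project are taken from the example below, and metallbconfigtemplates.ipam.mirantis.com is the assumed plural resource name:

kubectl -n default get metallbconfigtemplates.ipam.mirantis.com mgmt-metallb-template \
  -o jsonpath='{.status.state}{"\n"}{range .status.messages[*]}{@}{"\n"}{end}'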

MetalLB configuration examples

The following examples contain configuration templates that include MetalLBConfigTemplate.

Configuration example for using L2 (ARP) announcement
Configuration example for MetalLBConfig
apiVersion: kaas.mirantis.com/v1alpha1
kind: MetalLBConfig
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: kaas-mgmt
    kaas.mirantis.com/provider: baremetal
  name: mgmt-l2
  namespace: default
spec:
  templateName: mgmt-metallb-template
Configuration example for MetalLBConfigTemplate
apiVersion: ipam.mirantis.com/v1alpha1
kind: MetalLBConfigTemplate
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: kaas-mgmt
    kaas.mirantis.com/provider: baremetal
  name: mgmt-metallb-template
  namespace: default
spec:
  templates:
    l2Advertisements: |
      - name: management-lcm
        spec:
          ipAddressPools:
            - default
          interfaces:
            # IPs from the "default" address pool will be announced on the "k8s-lcm" interface
            - k8s-lcm
      - name: provision-pxe
        spec:
          ipAddressPools:
            - services-pxe
          interfaces:
            # IPs from the "services-pxe" address pool will be announced on the "k8s-pxe" interface
            - k8s-pxe
Configuration example for Subnet of the default pool
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: kaas-mgmt
    ipam/SVC-MetalLB: ""
    kaas.mirantis.com/provider: baremetal
    metallb/address-pool-auto-assign: "true"
    metallb/address-pool-name: default
    metallb/address-pool-protocol: layer2
  name: master-lb-default
  namespace: default
spec:
  cidr: 10.0.34.0/24
  includeRanges:
  - 10.0.34.101-10.0.34.120
Configuration example for Subnet of the services-pxe pool
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: kaas-mgmt
    ipam/SVC-MetalLB: ""
    kaas.mirantis.com/provider: baremetal
    metallb/address-pool-auto-assign: "false"
    metallb/address-pool-name: services-pxe
    metallb/address-pool-protocol: layer2
  name: master-lb-pxe
  namespace: default
spec:
  cidr: 10.0.24.0/24
  includeRanges:
  - 10.0.24.221-10.0.24.230

After the objects are created and processed by the kaas-ipam Controller, the status field displays for MetalLBConfigTemplate:

Configuration example of the status field for MetalLBConfigTemplate
status:
  checksums:
    annotations: sha256:38e0b9de817f645c4bec37c0d4a3e58baecccb040f5718dc069a72c7385a0bed
    labels: sha256:380337902278e8985e816978c349910a4f7ed98169c361eb8777411ac427e6ba
    spec: sha256:0860790fc94217598e0775ab2961a02acc4fba820ae17c737b94bb5d55390dbe
  messages:
  - Template for BFDProfiles is undefined
  - Template for BGPAdvertisements is undefined
  - Template for BGPPeers is undefined
  - Template for Communities is undefined
  objCreated: 2023-06-30T21:22:56.00000Z  by  v6.5.999-20230627-072014-ba8d918
  objStatusUpdated: 2023-07-04T00:30:35.82023Z  by  v6.5.999-20230627-072014-ba8d918
  objUpdated: 2023-06-30T22:10:51.73822Z  by  v6.5.999-20230627-072014-ba8d918
  renderedObjects:
    ipAddressPools:
    - name: default
      spec:
        addresses:
        - 10.0.34.101-10.0.34.120
        autoAssign: true
    - name: services-pxe
      spec:
        addresses:
        - 10.0.24.221-10.0.24.230
        autoAssign: false
    l2Advertisements:
    - name: management-lcm
      spec:
        interfaces:
        - k8s-lcm
        ipAddressPools:
        - default
    - name: provision-pxe
      spec:
        interfaces:
        - k8s-pxe
        ipAddressPools:
        - services-pxe
  state: OK

The following example illustrates contents of the status field that displays for MetalLBConfig after the objects are processed by the MetalLB Controller.

Configuration example of the status field for MetalLBConfig
status:
  objects:
    ipAddressPools:
    - name: default
      spec:
        addresses:
        - 10.0.34.101-10.0.34.120
        autoAssign: true
        avoidBuggyIPs: false
    - name: services-pxe
      spec:
        addresses:
        - 10.0.24.221-10.0.24.230
        autoAssign: false
        avoidBuggyIPs: false
    l2Advertisements:
    - name: management-lcm
      spec:
        interfaces:
        - k8s-lcm
        ipAddressPools:
        - default
    - name: provision-pxe
      spec:
        interfaces:
        - k8s-pxe
        ipAddressPools:
        - services-pxe
  propagateResult:
    message: Objects were successfully updated
    success: true
    time: "2023-07-05T03:10:23Z"
  updateResult:
    message: Objects were successfully read from MetalLB configuration specification
    success: true
    time: "2023-07-05T03:10:23Z"

Using the objects described above, several native MetalLB objects are created in the kaas-mgmt cluster during deployment.

Configuration example of MetalLB objects created during cluster deployment
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: management-lcm
  namespace: metallb-system
spec:
  interfaces:
  - k8s-lcm
  ipAddressPools:
  - default

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: provision-pxe
  namespace: metallb-system
spec:
  interfaces:
  - k8s-pxe
  ipAddressPools:
  - services-pxe

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 10.0.34.101-10.0.34.120
  autoAssign: true
  avoidBuggyIPs: false

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: services-pxe
  namespace: metallb-system
spec:
  addresses:
  - 10.0.24.221-10.0.24.230
  autoAssign: false
  avoidBuggyIPs: false
Configuration example for using BGP announcement

In the following configuration example, MetalLB is configured to use BGP for announcement of external addresses of Kubernetes load-balanced services for the managed cluster from master nodes. Each master node is located in its own rack without the L2 layer extension between racks.

This section contains only examples of the objects required to illustrate the MetalLB configuration. For Rack, MultiRackCluster, L2Template and other objects required to configure BGP announcement of the cluster API load balancer address for this scenario, refer to Multiple rack configuration example.

Configuration example for MetalLBConfig
apiVersion: kaas.mirantis.com/v1alpha1
kind: MetalLBConfig
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
  name: test-cluster-metallb-bgp
  namespace: managed-ns
spec:
  templateName: test-cluster-metallb-bgp-template
Configuration example for MetalLBConfigTemplate
apiVersion: ipam.mirantis.com/v1alpha1
kind: MetalLBConfigTemplate
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
  name: test-cluster-metallb-bgp-template
  namespace: managed-ns
spec:
  templates:
    bgpAdvertisements: |
      - name: services
        spec:
          ipAddressPools:
            - services
          peers:            # "peers" can be omitted if all defined peers
          - svc-peer-rack1  # are used in a particular "bgpAdvertisement"
          - svc-peer-rack2
          - svc-peer-rack3
    bgpPeers: |
      - name: svc-peer-rack1
        spec:
          peerAddress: 10.77.41.1  # peer address is in the external subnet #1
          peerASN: 65100
          myASN: 65101
          nodeSelectors:
            - matchLabels:
                rack-id: rack-master-1  # references the node corresponding
                                        # to the "test-cluster-master-1" Machine
      - name: svc-peer-rack2
        spec:
          peerAddress: 10.77.42.1  # peer address is in the external subnet #2
          peerASN: 65100
          myASN: 65101
          nodeSelectors:
            - matchLabels:
                rack-id: rack-master-2  # references the node corresponding
                                        # to the "test-cluster-master-2" Machine
      - name: svc-peer-rack3
        spec:
          peerAddress: 10.77.43.1  # peer address is in the external subnet #3
          peerASN: 65100
          myASN: 65101
          nodeSelectors:
            - matchLabels:
                rack-id: rack-master-3  # references the node corresponding
                                        # to the "test-cluster-master-3" Machine
Configuration example for Subnet
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    ipam/SVC-MetalLB: ""
    kaas.mirantis.com/provider: baremetal
    metallb/address-pool-auto-assign: "true"
    metallb/address-pool-name: services
    metallb/address-pool-protocol: bgp
  name: test-cluster-lb
  namespace: managed-ns
spec:
  cidr: 134.33.24.0/24
  includeRanges:
    - 134.33.24.221-134.33.24.240

The following objects illustrate the configuration of three subnets that are used to configure the external network in three racks. Each master node uses its own external L2/L3 network segment.

Configuration example for the Subnet ext-rack-control-1
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
  name: ext-rack-control-1
  namespace: managed-ns
spec:
  cidr: 10.77.41.0/28
  gateway: 10.77.41.1
  includeRanges:
    - 10.77.41.3-10.77.41.13
  nameservers:
    - 1.2.3.4
Configuration example for the Subnet ext-rack-control-2
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
  name: ext-rack-control-2
  namespace: managed-ns
spec:
  cidr: 10.77.42.0/28
  gateway: 10.77.42.1
  includeRanges:
    - 10.77.42.3-10.77.42.13
  nameservers:
    - 1.2.3.4
Configuration example for the Subnet ext-rack-control-3
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
  name: ext-rack-control-3
  namespace: managed-ns
spec:
  cidr: 10.77.43.0/28
  gateway: 10.77.43.1
  includeRanges:
    - 10.77.43.3-10.77.43.13
  nameservers:
    - 1.2.3.4

Rack objects and ipam/RackRef labels in Machine objects are not required for MetalLB configuration. However, in this example, Rack objects are implied to be used for configuring BGP announcement of the cluster API load balancer address, although they are not shown here.

Machine objects select different L2 templates because each master node uses different L2/L3 network segments for LCM, external, and other networks.

Configuration example for the Machine test-cluster-master-1
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: test-cluster-master-1
  namespace: managed-ns
  annotations:
    metal3.io/BareMetalHost: managed-ns/test-cluster-master-1
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    cluster.sigs.k8s.io/control-plane: controlplane
    hostlabel.bm.kaas.mirantis.com/controlplane: controlplane
    ipam/RackRef: rack-master-1
    kaas.mirantis.com/provider: baremetal
spec:
  providerSpec:
    value:
      kind: BareMetalMachineProviderSpec
      apiVersion: baremetal.k8s.io/v1alpha1
      hostSelector:
        matchLabels:
          kaas.mirantis.com/baremetalhost-id: test-cluster-master-1
      l2TemplateSelector:
        name: test-cluster-master-1
      nodeLabels:
      - key: rack-id          # it is used in "nodeSelectors"
        value: rack-master-1  # of "bgpPeer" MetalLB objects
Configuration example for the Machine test-cluster-master-2
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: test-cluster-master-2
  namespace: managed-ns
  annotations:
    metal3.io/BareMetalHost: managed-ns/test-cluster-master-2
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    cluster.sigs.k8s.io/control-plane: controlplane
    hostlabel.bm.kaas.mirantis.com/controlplane: controlplane
    ipam/RackRef: rack-master-2
    kaas.mirantis.com/provider: baremetal
spec:
  providerSpec:
    value:
      kind: BareMetalMachineProviderSpec
      apiVersion: baremetal.k8s.io/v1alpha1
      hostSelector:
        matchLabels:
          kaas.mirantis.com/baremetalhost-id: test-cluster-master-2
      l2TemplateSelector:
        name: test-cluster-master-2
      nodeLabels:
      - key: rack-id          # it is used in "nodeSelectors"
        value: rack-master-2  # of "bgpPeer" MetalLB objects
Configuration example for the Machine test-cluster-master-3
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: test-cluster-master-3
  namespace: managed-ns
  annotations:
    metal3.io/BareMetalHost: managed-ns/test-cluster-master-3
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    cluster.sigs.k8s.io/control-plane: controlplane
    hostlabel.bm.kaas.mirantis.com/controlplane: controlplane
    ipam/RackRef: rack-master-3
    kaas.mirantis.com/provider: baremetal
spec:
  providerSpec:
    value:
      kind: BareMetalMachineProviderSpec
      apiVersion: baremetal.k8s.io/v1alpha1
      hostSelector:
        matchLabels:
          kaas.mirantis.com/baremetalhost-id: test-cluster-master-3
      l2TemplateSelector:
        name: test-cluster-master-3
      nodeLabels:
      - key: rack-id          # it is used in "nodeSelectors"
        value: rack-master-3  # of "bgpPeer" MetalLB objects
MultiRackCluster

TechPreview Available since 2.24.4

This section describes the MultiRackCluster resource used in the Container Cloud API.

When you create a bare metal managed cluster with a multi-rack topology, where Kubernetes masters are distributed across multiple racks without L2 layer extension between them, the MultiRackCluster resource allows you to set cluster-wide parameters for configuration of the BGP announcement of the cluster API load balancer address. In this scenario, the MultiRackCluster object must be bound to the Cluster object.

The MultiRackCluster object is generally used for a particular cluster in conjunction with Rack objects described in Rack.

For demonstration purposes, the Container Cloud MultiRackCluster custom resource (CR) description is split into the following major sections:

MultiRackCluster metadata

The Container Cloud MultiRackCluster CR metadata contains the following fields:

  • apiVersion

    API version of the object that is ipam.mirantis.com/v1alpha1.

  • kind

    Object type that is MultiRackCluster.

  • metadata

    The metadata field contains the following subfields:

    • name

      Name of the MultiRackCluster object.

    • namespace

      Container Cloud project (Kubernetes namespace) in which the object was created.

    • labels

      Key-value pairs that are attached to the object:

      • cluster.sigs.k8s.io/cluster-name

        Cluster object name that this MultiRackCluster object is applied to. To enable the use of BGP announcement for the cluster API LB address, set the useBGPAnnouncement parameter in the Cluster object to true:

        spec:
          providerSpec:
            value:
              useBGPAnnouncement: true
        
      • kaas.mirantis.com/provider

        Provider name that is baremetal.

      • kaas.mirantis.com/region

        Region name.

        Note

        The kaas.mirantis.com/region label is removed from all Container Cloud objects in 2.26.0 (Cluster releases 17.1.0 and 16.1.0). Therefore, do not add the label starting from these releases. On existing clusters updated to these releases, or if manually added, this label will be ignored by Container Cloud.

      Warning

      Labels and annotations that are not documented in this API Reference are generated automatically by Container Cloud. Do not modify them using the Container Cloud API.

The MultiRackCluster metadata configuration example:

apiVersion: ipam.mirantis.com/v1alpha1
kind: MultiRackCluster
metadata:
  name: multirack-test-cluster
  namespace: managed-ns
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
MultiRackCluster spec

The spec field of the MultiRackCluster resource describes the desired state of the object. It contains the following fields:

  • bgpdConfigFileName

    Name of the configuration file for the BGP daemon (bird). Recommended value is bird.conf.

  • bgpdConfigFilePath

    Path to the directory where the configuration file for the BGP daemon (bird) is added. The recommended value is /etc/bird.

  • bgpdConfigTemplate

    Optional. Configuration text file template for the BGP daemon (bird) configuration file where you can use go template constructs and the following variables:

    • RouterID, LocalIP

      Local IP on the given network, which is a key in the Rack.spec.peeringMap dictionary, for a given node. You can use it, for example, in the router id {{$.RouterID}}; instruction.

    • LocalASN

      Local AS number.

    • NeighborASN

      Neighbor AS number.

    • NeighborIP

      Neighbor IP address. Its values are taken from Rack.spec.peeringMap. It can be used only inside the range iteration through the Neighbors list.

    • Neighbors

      List of peers in the given network and node. It can be iterated through the range statement in the go template.

    Values for LocalASN and NeighborASN are taken from:

    • MultiRackCluster.spec.defaultPeer - if not used as a field inside the range iteration through the Neighbors list.

    • Corresponding values of Rack.spec.peeringMap - if used as a field inside the range iteration through the Neighbors list.

    This template can be overridden using the Rack objects. For details, see Rack spec.

  • defaultPeer

    Configuration parameters for the default BGP peer. These parameters will be used in rendering of the configuration file for BGP daemon from the template if they are not overridden for a particular rack or network using Rack objects. For details, see Rack spec.

    • localASN

      Mandatory. Local AS number.

    • neighborASN

      Mandatory. Neighbor AS number.

    • neighborIP

      Reserved. Neighbor IP address. Leave it as an empty string.

    • password

      Optional. Neighbor password. If not set, you can hardcode it in bgpdConfigTemplate. It is required for MD5 authentication between BGP peers.

Configuration examples:

Since Cluster releases 17.1.0 and 16.1.0 for bird v2.x
spec:
  bgpdConfigFileName: bird.conf
  bgpdConfigFilePath: /etc/bird
  bgpdConfigTemplate: |
    protocol device {
    }
    #
    protocol direct {
      interface "lo";
      ipv4;
    }
    #
    protocol kernel {
      ipv4 {
        export all;
      };
    }
    #
    {{range $i, $peer := .Neighbors}}
    protocol bgp 'bgp_peer_{{$i}}' {
      local port 1179 as {{.LocalASN}};
      neighbor {{.NeighborIP}} as {{.NeighborASN}};
      ipv4 {
        import none;
        export filter {
          if dest = RTD_UNREACHABLE then {
            reject;
          }
          accept;
        };
      };
    }
    {{end}}
  defaultPeer:
    localASN: 65101
    neighborASN: 65100
    neighborIP: ""
Before Cluster releases 17.1.0 and 16.1.0 for bird v1.x
spec:
  bgpdConfigFileName: bird.conf
  bgpdConfigFilePath: /etc/bird
  bgpdConfigTemplate: |
    listen bgp port 1179;
    protocol device {
    }
    #
    protocol direct {
      interface "lo";
    }
    #
    protocol kernel {
      export all;
    }
    #
    {{range $i, $peer := .Neighbors}}
    protocol bgp 'bgp_peer_{{$i}}' {
      local as {{.LocalASN}};
      neighbor {{.NeighborIP}} as {{.NeighborASN}};
      import all;
      export filter {
        if dest = RTD_UNREACHABLE then {
          reject;
        }
        accept;
      };
    }
    {{end}}
  defaultPeer:
    localASN: 65101
    neighborASN: 65100
    neighborIP: ""
MultiRackCluster status

The status field of the MultiRackCluster resource reflects the actual state of the MultiRackCluster object and contains the following fields:

  • state Since 2.23.0

    Message that reflects the current status of the resource. The list of possible values includes the following:

    • OK - object is operational.

    • ERR - object is non-operational. This status has a detailed description in the messages list.

    • TERM - object was deleted and is terminating.

  • messages Since 2.23.0

    List of error or warning messages if the object state is ERR.

  • objCreated

    Date, time, and IPAM version of the resource creation.

  • objStatusUpdated

    Date, time, and IPAM version of the last update of the status field in the resource.

  • objUpdated

    Date, time, and IPAM version of the last resource update.

Configuration example:

status:
  checksums:
    annotations: sha256:38e0b9de817f645c4bec37c0d4a3e58baecccb040f5718dc069a72c7385a0bed
    labels: sha256:d8f8eacf487d57c22ca0ace29bd156c66941a373b5e707d671dc151959a64ce7
    spec: sha256:66b5d28215bdd36723fe6230359977fbede828906c6ae96b5129a972f1fa51e9
  objCreated: 2023-08-11T12:25:21.00000Z  by  v6.5.999-20230810-155553-2497818
  objStatusUpdated: 2023-08-11T12:32:58.11966Z  by  v6.5.999-20230810-155553-2497818
  objUpdated: 2023-08-11T12:32:57.32036Z  by  v6.5.999-20230810-155553-2497818
  state: OK
MultiRackCluster and Rack usage examples

The following configuration examples of several bare metal objects illustrate how to configure BGP announcement of the load balancer address used to expose the cluster API.

Single rack configuration example

In the following example, all master nodes are in a single rack. One Rack object is required in this case for master nodes. Some worker nodes can coexist in the same rack with master nodes or occupy separate racks. It is implied that the useBGPAnnouncement parameter is set to true in the corresponding Cluster object.

Configuration example for MultiRackCluster

Since Cluster releases 17.1.0 and 16.1.0 for bird v2.x:

apiVersion: ipam.mirantis.com/v1alpha1
kind: MultiRackCluster
metadata:
  name: multirack-test-cluster
  namespace: managed-ns
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
spec:
  bgpdConfigFileName: bird.conf
  bgpdConfigFilePath: /etc/bird
  bgpdConfigTemplate: |
    protocol device {
    }
    #
    protocol direct {
      interface "lo";
      ipv4;
    }
    #
    protocol kernel {
      ipv4 {
        export all;
      };
    }
    #
    {{range $i, $peer := .Neighbors}}
    protocol bgp 'bgp_peer_{{$i}}' {
      local port 1179 as {{.LocalASN}};
      neighbor {{.NeighborIP}} as {{.NeighborASN}};
      ipv4 {
        import none;
        export filter {
          if dest = RTD_UNREACHABLE then {
            reject;
          }
          accept;
        };
      };
    }
    {{end}}
  defaultPeer:
    localASN: 65101
    neighborASN: 65100
    neighborIP: ""

Before Cluster releases 17.1.0 and 16.1.0 for bird v1.x:

apiVersion: ipam.mirantis.com/v1alpha1
kind: MultiRackCluster
metadata:
  name: multirack-test-cluster
  namespace: managed-ns
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
spec:
  bgpdConfigFileName: bird.conf
  bgpdConfigFilePath: /etc/bird
  bgpdConfigTemplate: |
    listen bgp port 1179;
    protocol device {
    }
    #
    protocol direct {
      interface "lo";
    }
    #
    protocol kernel {
      export all;
    }
    #
    {{range $i, $peer := .Neighbors}}
    protocol bgp 'bgp_peer_{{$i}}' {
      local as {{.LocalASN}};
      neighbor {{.NeighborIP}} as {{.NeighborASN}};
      import all;
      export filter {
        if dest = RTD_UNREACHABLE then {
          reject;
        }
        accept;
      };
    }
    {{end}}
  defaultPeer:
    localASN: 65101
    neighborASN: 65100
    neighborIP: ""
Configuration example for Rack
apiVersion: ipam.mirantis.com/v1alpha1
kind: Rack
metadata:
  name: rack-master
  namespace: managed-ns
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
spec:
  peeringMap:
    lcm-rack-control:
      peers:
      - neighborIP: 10.77.31.1  # "localASN" and "neighborASN" are taken from
      - neighborIP: 10.77.37.1  # "MultiRackCluster.spec.defaultPeer"
                                # if not set here
Configuration example for Machine
# "Machine" templates for "test-cluster-master-2" and "test-cluster-master-3"
# differ only in BMH selectors in this example.
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: test-cluster-master-1
  namespace: managed-ns
  annotations:
    metal3.io/BareMetalHost: managed-ns/test-cluster-master-1
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    cluster.sigs.k8s.io/control-plane: controlplane
    hostlabel.bm.kaas.mirantis.com/controlplane: controlplane
    ipam/RackRef: rack-master # used to connect "IpamHost" to "Rack" objects, so that
                              # BGP parameters can be obtained from "Rack" to
                              # render BGP configuration for the given "IpamHost" object
    kaas.mirantis.com/provider: baremetal
spec:
  providerSpec:
    value:
      kind: BareMetalMachineProviderSpec
      apiVersion: baremetal.k8s.io/v1alpha1
      hostSelector:
        matchLabels:
          kaas.mirantis.com/baremetalhost-id: test-cluster-master-1
      l2TemplateSelector:
        name: test-cluster-master

Note

Before update of the management cluster to Container Cloud 2.29.0 (Cluster release 16.4.0), instead of BareMetalHostInventory, use the BareMetalHost object. For details, see BareMetalHost.

Caution

While the Cluster release of the management cluster is 16.4.0, BareMetalHostInventory operations are allowed to m:kaas@management-admin only. Once the management cluster is updated to the Cluster release 16.4.1 (or later), this limitation will be lifted.

Configuration example for L2Template
apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
  name: test-cluster-master
  namespace: managed-ns
spec:
  ...
  l3Layout:
    - subnetName: lcm-rack-control # this network is referenced in "rack-master" Rack
      scope:      namespace
  ...
  npTemplate: |
    ...
    ethernets:
      lo:
        addresses:
          - {{ cluster_api_lb_ip }}  # function for cluster API LB IP
        dhcp4: false
        dhcp6: false
    ...

After the objects are created and nodes are provisioned, the IpamHost objects will have BGP daemon configuration files in their status fields. For example:

Configuration example for IpamHost
apiVersion: ipam.mirantis.com/v1alpha1
kind: IpamHost
...
status:
  ...
  netconfigFiles:
  - content: bGlzdGVuIGJncCBwb3J0IDExNzk7CnByb3RvY29sIGRldmljZSB7Cn0KIwpwcm90b2NvbCBkaXJlY3QgewogIGludGVyZmFjZSAibG8iOwp9CiMKcHJvdG9jb2wga2VybmVsIHsKICBleHBvcnQgYWxsOwp9CiMKCnByb3RvY29sIGJncCAnYmdwX3BlZXJfMCcgewogIGxvY2FsIGFzIDY1MTAxOwogIG5laWdoYm9yIDEwLjc3LjMxLjEgYXMgNjUxMDA7CiAgaW1wb3J0IGFsbDsKICBleHBvcnQgZmlsdGVyIHsKICAgIGlmIGRlc3QgPSBSVERfVU5SRUFDSEFCTEUgdGhlbiB7CiAgICAgIHJlamVjdDsKICAgIH0KICAgIGFjY2VwdDsKICB9Owp9Cgpwcm90b2NvbCBiZ3AgJ2JncF9wZWVyXzEnIHsKICBsb2NhbCBhcyA2NTEwMTsKICBuZWlnaGJvciAxMC43Ny4zNy4xIGFzIDY1MTAwOwogIGltcG9ydCBhbGw7CiAgZXhwb3J0IGZpbHRlciB7CiAgICBpZiBkZXN0ID0gUlREX1VOUkVBQ0hBQkxFIHRoZW4gewogICAgICByZWplY3Q7CiAgICB9CiAgICBhY2NlcHQ7CiAgfTsKfQoK
    path: /etc/bird/bird.conf
  - content: ...
    path: /etc/netplan/60-kaas-lcm-netplan.yaml
  netconfigFilesStates:
    /etc/bird/bird.conf: 'OK: 2023-08-17T08:00:58.96140Z 25cde040e898fd5bf5b28aacb12f046b4adb510570ecf7d7fa5a8467fa4724ec'
    /etc/netplan/60-kaas-lcm-netplan.yaml: 'OK: 2023-08-11T12:33:24.54439Z 37ac6e9fe13e5969f35c20c615d96b4ed156341c25e410e95831794128601e01'
  ...

You can decode /etc/bird/bird.conf contents and verify the configuration:

echo "<<base64-string>>" | base64 -d
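
As a minimal sketch, you can also extract and decode the file in one step, assuming ipamhosts.ipam.mirantis.com is the plural resource name and substituting the IpamHost object name:

kubectl -n managed-ns get ipamhosts.ipam.mirantis.com <ipamhost-name> \
  -o jsonpath='{.status.netconfigFiles[?(@.path=="/etc/bird/bird.conf")].content}' | base64 -d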

The following system output applies to the above configuration examples:

Configuration example for the decoded bird.conf

Since Cluster releases 17.1.0 and 16.1.0 for bird v2.x:

protocol device {
}
#
protocol direct {
  interface "lo";
  ipv4;
}
#
protocol kernel {
  ipv4 {
    export all;
  };
}
#

protocol bgp 'bgp_peer_0' {
  local port 1179 as 65101;
  neighbor 10.77.31.1 as 65100;
  ipv4 {
    import none;
    export filter {
      if dest = RTD_UNREACHABLE then {
        reject;
      }
      accept;
    };
  };
}

protocol bgp 'bgp_peer_1' {
  local port 1179 as 65101;
  neighbor 10.77.37.1 as 65100;
  ipv4 {
    import none;
    export filter {
      if dest = RTD_UNREACHABLE then {
        reject;
      }
      accept;
    };
  };
}

Before Cluster releases 17.1.0 and 16.1.0 for bird v1.x:

listen bgp port 1179;
protocol device {
}
#
protocol direct {
  interface "lo";
}
#
protocol kernel {
  export all;
}
#

protocol bgp 'bgp_peer_0' {
  local as 65101;
  neighbor 10.77.31.1 as 65100;
  import all;
  export filter {
    if dest = RTD_UNREACHABLE then {
      reject;
    }
    accept;
  };
}

protocol bgp 'bgp_peer_1' {
  local as 65101;
  neighbor 10.77.37.1 as 65100;
  import all;
  export filter {
    if dest = RTD_UNREACHABLE then {
      reject;
    }
    accept;
  };
}

BGP daemon configuration files are copied from IpamHost.status to the corresponding LCMMachine object in the same way as netplan configuration files. Then, the LCM-Agent writes the configuration files to the corresponding node.

Multiple rack configuration example

In the following configuration example, each master node is located in its own rack. Three Rack objects are required in this case for master nodes. Some worker nodes can coexist in the same racks with master nodes or occupy separate racks. Only objects that are required to show configuration for BGP announcement of the cluster API load balancer address are provided here.

For the description of Rack, MetalLBConfig, and other objects that are required for MetalLB configuration in this scenario, refer to Configuration example for using BGP announcement.

It is implied that the useBGPAnnouncement parameter is set to true in the corresponding Cluster object.

Configuration example for MultiRackCluster

Since Cluster releases 17.1.0 and 16.1.0 for bird v2.x:

# It is the same object as in the single rack example.
apiVersion: ipam.mirantis.com/v1alpha1
kind: MultiRackCluster
metadata:
  name: multirack-test-cluster
  namespace: managed-ns
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
spec:
  bgpdConfigFileName: bird.conf
  bgpdConfigFilePath: /etc/bird
  bgpdConfigTemplate: |
    protocol device {
    }
    #
    protocol direct {
      interface "lo";
      ipv4;
    }
    #
    protocol kernel {
      ipv4 {
        export all;
      };
    }
    #
    {{range $i, $peer := .Neighbors}}
    protocol bgp 'bgp_peer_{{$i}}' {
      local port 1179 as {{.LocalASN}};
      neighbor {{.NeighborIP}} as {{.NeighborASN}};
      ipv4 {
        import none;
        export filter {
          if dest = RTD_UNREACHABLE then {
            reject;
          }
          accept;
        };
      };
    }
    {{end}}
  defaultPeer:
    localASN: 65101
    neighborASN: 65100
    neighborIP: ""

Before Cluster releases 17.1.0 and 16.1.0 for bird v1.x:

# It is the same object as in the single rack example.
apiVersion: ipam.mirantis.com/v1alpha1
kind: MultiRackCluster
metadata:
  name: multirack-test-cluster
  namespace: managed-ns
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
spec:
  bgpdConfigFileName: bird.conf
  bgpdConfigFilePath: /etc/bird
  bgpdConfigTemplate: |
    listen bgp port 1179;
    protocol device {
    }
    #
    protocol direct {
      interface "lo";
    }
    #
    protocol kernel {
      export all;
    }
    #
    {{range $i, $peer := .Neighbors}}
    protocol bgp 'bgp_peer_{{$i}}' {
      local as {{.LocalASN}};
      neighbor {{.NeighborIP}} as {{.NeighborASN}};
      import all;
      export filter {
        if dest = RTD_UNREACHABLE then {
          reject;
        }
        accept;
      };
    }
    {{end}}
  defaultPeer:
    localASN: 65101
    neighborASN: 65100
    neighborIP: ""

The following Rack objects differ in neighbor IP addresses and in the network (L3 subnet) used for BGP connection to announce the cluster API LB IP and for cluster API traffic.

Configuration example for Rack 1
apiVersion: ipam.mirantis.com/v1alpha1
kind: Rack
metadata:
  name: rack-master-1
  namespace: managed-ns
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
spec:
  peeringMap:
    lcm-rack-control-1:
      peers:
      - neighborIP: 10.77.31.2  # "localASN" and "neighborASN" are taken from
      - neighborIP: 10.77.31.3  # "MultiRackCluster.spec.defaultPeer" if
                                # not set here
Configuration example for Rack 2
apiVersion: ipam.mirantis.com/v1alpha1
kind: Rack
metadata:
  name: rack-master-2
  namespace: managed-ns
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
spec:
  peeringMap:
    lcm-rack-control-2:
      peers:
      - neighborIP: 10.77.32.2  # "localASN" and "neighborASN" are taken from
      - neighborIP: 10.77.32.3  # "MultiRackCluster.spec.defaultPeer" if
                                # not set here
Configuration example for Rack 3
apiVersion: ipam.mirantis.com/v1alpha1
kind: Rack
metadata:
  name: rack-master-3
  namespace: managed-ns
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
spec:
  peeringMap:
    lcm-rack-control-3:
      peers:
      - neighborIP: 10.77.33.2  # "localASN" and "neighborASN" are taken from
      - neighborIP: 10.77.33.3  # "MultiRackCluster.spec.defaultPeer" if
                                # not set here

Compared to the single rack configuration example, the following Machine objects differ in:

  • BMH selectors

  • L2Template selectors

  • Rack selectors (the ipam/RackRef label)

  • The rack-id node labels

    The labels on master nodes are required for MetalLB node selectors if MetalLB is used to announce LB IP addresses on master nodes. In this scenario, the L2 (ARP) announcement mode cannot be used for MetalLB because master nodes are in different L2 segments. So, the BGP announcement mode must be used for MetalLB. Node selectors are required to properly configure BGP connections from each master node.

Note

Before update of the management cluster to Container Cloud 2.29.0 (Cluster release 16.4.0), instead of BareMetalHostInventory, use the BareMetalHost object. For details, see BareMetalHost.

Caution

While the Cluster release of the management cluster is 16.4.0, BareMetalHostInventory operations are allowed to m:kaas@management-admin only. Once the management cluster is updated to the Cluster release 16.4.1 (or later), this limitation will be lifted.

Configuration example for Machine 1
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: test-cluster-master-1
  namespace: managed-ns
  annotations:
    metal3.io/BareMetalHost: managed-ns/test-cluster-master-1
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    cluster.sigs.k8s.io/control-plane: controlplane
    hostlabel.bm.kaas.mirantis.com/controlplane: controlplane
    ipam/RackRef: rack-master-1
    kaas.mirantis.com/provider: baremetal
spec:
  providerSpec:
    value:
      kind: BareMetalMachineProviderSpec
      apiVersion: baremetal.k8s.io/v1alpha1
      hostSelector:
        matchLabels:
          kaas.mirantis.com/baremetalhost-id: test-cluster-master-1
      l2TemplateSelector:
        name: test-cluster-master-1
      nodeLabels:             # not used for BGP announcement of the
      - key: rack-id          # cluster API LB IP but can be used for
        value: rack-master-1  # MetalLB if "nodeSelectors" are required
Configuration example for Machine 2
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: test-cluster-master-2
  namespace: managed-ns
  annotations:
    metal3.io/BareMetalHost: managed-ns/test-cluster-master-2
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    cluster.sigs.k8s.io/control-plane: controlplane
    hostlabel.bm.kaas.mirantis.com/controlplane: controlplane
    ipam/RackRef: rack-master-2
    kaas.mirantis.com/provider: baremetal
spec:
  providerSpec:
    value:
      kind: BareMetalMachineProviderSpec
      apiVersion: baremetal.k8s.io/v1alpha1
      hostSelector:
        matchLabels:
          kaas.mirantis.com/baremetalhost-id: test-cluster-master-2
      l2TemplateSelector:
        name: test-cluster-master-2
      nodeLabels:             # not used for BGP announcement of the
      - key: rack-id          # cluster API LB IP but can be used for
        value: rack-master-2  # MetalLB if "nodeSelectors" are required
Configuration example for Machine 3
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: test-cluster-master-3
  namespace: managed-ns
  annotations:
    metal3.io/BareMetalHost: managed-ns/test-cluster-master-3
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    cluster.sigs.k8s.io/control-plane: controlplane
    hostlabel.bm.kaas.mirantis.com/controlplane: controlplane
    ipam/RackRef: rack-master-3
    kaas.mirantis.com/provider: baremetal
spec:
  providerSpec:
    value:
      kind: BareMetalMachineProviderSpec
      apiVersion: baremetal.k8s.io/v1alpha1
      hostSelector:
        matchLabels:
          kaas.mirantis.com/baremetalhost-id: test-cluster-master-3
      l2TemplateSelector:
        name: test-cluster-master-3
      nodeLabels:             # optional. not used for BGP announcement of
      - key: rack-id          # the cluster API LB IP but can be used for
        value: rack-master-3  # MetalLB if "nodeSelectors" are required
Configuration example for Subnet defining the cluster API LB IP address
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: test-cluster-api-lb
  namespace: managed-ns
  labels:
    kaas.mirantis.com/provider: baremetal
    ipam/SVC-LBhost: "1"
    cluster.sigs.k8s.io/cluster-name: test-cluster
spec:
  cidr: 134.33.24.201/32
  useWholeCidr: true
Configuration example for Subnet of the LCM network in the rack-master-1 rack
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
  name: lcm-rack-control-1
  namespace: managed-ns
spec:
  cidr: 10.77.31.0/28
  gateway: 10.77.31.1
  includeRanges:
    - 10.77.31.4-10.77.31.13
  nameservers:
    - 1.2.3.4
Configuration example for Subnet of the LCM network in the rack-master-2 rack
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
  name: lcm-rack-control-2
  namespace: managed-ns
spec:
  cidr: 10.77.32.0/28
  gateway: 10.77.32.1
  includeRanges:
    - 10.77.32.4-10.77.32.13
  nameservers:
    - 1.2.3.4
Configuration example for Subnet of the LCM network in the rack-master-3 rack
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
  name: lcm-rack-control-3
  namespace: managed-ns
spec:
  cidr: 10.77.33.0/28
  gateway: 10.77.33.1
  includeRanges:
    - 10.77.33.4-10.77.33.13
  nameservers:
    - 1.2.3.4

The following L2Template objects differ in LCM and external subnets that each master node uses.

Configuration example for L2Template 1
apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
  name: test-cluster-master-1
  namespace: managed-ns
spec:
  ...
  l3Layout:
    - subnetName: lcm-rack-control-1  # this network is referenced
      scope:      namespace           # in the "rack-master-1" Rack
    - subnetName: ext-rack-control-1  # this optional network is used for
      scope:      namespace           # Kubernetes services traffic and
                                      # MetalLB BGP connections
  ...
  npTemplate: |
    ...
    ethernets:
      lo:
        addresses:
          - {{ cluster_api_lb_ip }}  # function for cluster API LB IP
        dhcp4: false
        dhcp6: false
    ...
Configuration example for L2Template 2
apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
  name: test-cluster-master-2
  namespace: managed-ns
spec:
  ...
  l3Layout:
    - subnetName: lcm-rack-control-2  # this network is referenced
      scope:      namespace           # in "rack-master-2" Rack
    - subnetName: ext-rack-control-2  # this network is used for Kubernetes services
      scope:      namespace           # traffic and MetalLB BGP connections
  ...
  npTemplate: |
    ...
    ethernets:
      lo:
        addresses:
          - {{ cluster_api_lb_ip }}  # function for cluster API LB IP
        dhcp4: false
        dhcp6: false
    ...
Configuration example for L2Template 3
apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
  name: test-cluster-master-3
  namespace: managed-ns
spec:
  ...
  l3Layout:
    - subnetName: lcm-rack-control-3  # this network is referenced
      scope:      namespace           # in "rack-master-3" Rack
    - subnetName: ext-rack-control-3  # this network is used for Kubernetes services
      scope:      namespace           # traffic and MetalLB BGP connections
  ...
  npTemplate: |
    ...
    ethernets:
      lo:
        addresses:
          - {{ cluster_api_lb_ip }}  # function for cluster API LB IP
        dhcp4: false
        dhcp6: false
    ...

The following MetalLBConfig example illustrates how node labels are used in nodeSelectors of bgpPeers. Each of the bgpPeers corresponds to one of the master nodes.

Configuration example for MetalLBConfig
apiVersion: kaas.mirantis.com/v1alpha1
kind: MetalLBConfig
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal
  name: test-cluster-metallb-config
  namespace: managed-ns
spec:
  ...
  bgpPeers:
    - name: svc-peer-rack1
      spec:
        holdTime: 0s
        keepaliveTime: 0s
        peerAddress: 10.77.41.1 # peer address is in external subnet
                                # instead of LCM subnet used for BGP
                                # connection to announce cluster API LB IP
        peerASN: 65100  # the same as for BGP connection used to announce
                        # cluster API LB IP
        myASN: 65101    # the same as for BGP connection used to announce
                        # cluster API LB IP
        nodeSelectors:
          - matchLabels:
              rack-id: rack-master-1  # references the node corresponding
                                      # to "test-cluster-master-1" Machine
    - name: svc-peer-rack2
      spec:
        holdTime: 0s
        keepaliveTime: 0s
        peerAddress: 10.77.42.1
        peerASN: 65100
        myASN: 65101
        nodeSelectors:
          - matchLabels:
              rack-id: rack-master-2
    - name: svc-peer-rack3
      spec:
        holdTime: 0s
        keepaliveTime: 0s
        peerAddress: 10.77.43.1
        peerASN: 65100
        myASN: 65101
        nodeSelectors:
          - matchLabels:
              rack-id: rack-master-3
  ...

After the objects are created and nodes are provisioned, the IpamHost objects will have BGP daemon configuration files in their status fields. Refer to Single rack configuration example on how to verify the BGP configuration files.

Rack

TechPreview Available since 2.24.4

This section describes the Rack resource used in the Container Cloud API.

When you create a bare metal managed cluster with a multi-rack topology, where Kubernetes masters are distributed across multiple racks without L2 layer extension between them, the Rack resource allows you to configure BGP announcement of the cluster API load balancer address from each rack.

In this scenario, Rack objects must be bound to Machine objects corresponding to master nodes of the cluster. Each Rack object describes the configuration of the BGP daemon (bird) used to announce the cluster API LB address from a particular master node (or from several nodes in the same rack).

Rack objects are used for a particular cluster only in conjunction with the MultiRackCluster object described in MultiRackCluster.

For demonstration purposes, the Container Cloud Rack custom resource (CR) description is split into the following major sections:

For configuration examples, see MultiRackCluster and Rack usage examples.

Rack metadata

The Container Cloud Rack CR metadata contains the following fields:

  • apiVersion

    API version of the object that is ipam.mirantis.com/v1alpha1.

  • kind

    Object type that is Rack.

  • metadata

    The metadata field contains the following subfields:

    • name

      Name of the Rack object. Corresponding Machine objects must have their ipam/RackRef label value set to the name of the Rack object. This label is required only for Machine objects of the master nodes that announce the cluster API LB address.

    • namespace

      Container Cloud project (Kubernetes namespace) where the object was created.

    • labels

      Key-value pairs that are attached to the object:

      • cluster.sigs.k8s.io/cluster-name

        Cluster object name that this Rack object is applied to.

      • kaas.mirantis.com/provider

        Provider name that is baremetal.

      • kaas.mirantis.com/region

        Region name.

        Note

        The kaas.mirantis.com/region label is removed from all Container Cloud objects in 2.26.0 (Cluster releases 17.1.0 and 16.1.0). Therefore, do not add the label starting from these releases. On existing clusters updated to these releases, or if manually added, this label will be ignored by Container Cloud.

      Warning

      Labels and annotations that are not documented in this API Reference are generated automatically by Container Cloud. Do not modify them using the Container Cloud API.

Rack metadata example:

apiVersion: ipam.mirantis.com/v1alpha1
kind: Rack
metadata:
  name: rack-1
  namespace: managed-ns
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    kaas.mirantis.com/provider: baremetal

Corresponding Machine metadata example:

apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: test-cluster
    cluster.sigs.k8s.io/control-plane: controlplane
    hostlabel.bm.kaas.mirantis.com/controlplane: controlplane
    ipam/RackRef: rack-1
    kaas.mirantis.com/provider: baremetal
  name: managed-master-1-control-efi-6tg52
  namespace: managed-ns
Rack spec

The spec field of the Rack resource describes the desired state of the object. It contains the following fields:

  • bgpdConfigTemplate

    Optional. Configuration file template that will be used to create the configuration file for a BGP daemon on nodes in this rack. If not set, the configuration file template from the corresponding MultiRackCluster object is used.

  • peeringMap

    Structure that describes general parameters of BGP peers to be used in the configuration file for a BGP daemon for each network where BGP announcement is used. Also, you can define a separate configuration file template for the BGP daemon for each of those networks. The peeringMap structure is as follows:

    peeringMap:
      <network-name-a>:
        peers:
          - localASN: <localASN-1>
            neighborASN: <neighborASN-1>
            neighborIP: <neighborIP-1>
            password: <password-1>
          - localASN: <localASN-2>
            neighborASN: <neighborASN-2>
            neighborIP: <neighborIP-2>
            password: <password-2>
        bgpdConfigTemplate: |
          <configuration file template for a BGP daemon>
      ...
    
    • <network-name-a>

      Name of the network where a BGP daemon should connect to the neighbor BGP peers. By default, it is implied that the same network is used on the node to make connection to the neighbor BGP peers as well as to receive and respond to the traffic directed to the IP address being advertised. In our scenario, the advertised IP address is the cluster API LB IP address.

      This network name must be the same as the subnet name used in the L2 template (l3Layout section) for the corresponding master node(s).

    • peers

      Optional. List of dictionaries where each dictionary defines configuration parameters for a particular BGP peer. Peer parameters are as follows:

      • localASN

        Optional. Local AS number. If not set, it can be taken from MultiRackCluster.spec.defaultPeer or can be hardcoded in bgpdConfigTemplate.

      • neighborASN

        Optional. Neighbor AS number. If not set, it can be taken from MultiRackCluster.spec.defaultPeer or can be hardcoded in bgpdConfigTemplate.

      • neighborIP

        Mandatory. Neighbor IP address.

      • password

        Optional. Neighbor password. If not set, it can be taken from MultiRackCluster.spec.defaultPeer or can be hardcoded in bgpdConfigTemplate. It is required when MD5 authentication between BGP peers is used.

    • bgpdConfigTemplate

      Optional. Configuration file template that will be used to create the configuration file for the BGP daemon of the <network-name-a> network on a particular node. If not set, Rack.spec.bgpdConfigTemplate is used.

Configuration example:

Since Cluster releases 17.1.0 and 16.1.0 for bird v2.x
spec:
  bgpdConfigTemplate: |
    protocol device {
    }
    #
    protocol direct {
      interface "lo";
      ipv4;
    }
    #
    protocol kernel {
      ipv4 {
        export all;
      };
    }
    #
    protocol bgp bgp_lcm {
      local port 1179 as {{.LocalASN}};
      neighbor {{.NeighborIP}} as {{.NeighborASN}};
      ipv4 {
         import none;
         export filter {
           if dest = RTD_UNREACHABLE then {
             reject;
           }
           accept;
         };
      };
    }
  peeringMap:
    lcm-rack1:
      peers:
      - localASN: 65050
        neighborASN: 65011
        neighborIP: 10.77.31.1
Before Cluster releases 17.1.0 and 16.1.0 for bird v1.x
spec:
  bgpdConfigTemplate: |
    listen bgp port 1179;
    protocol device {
    }
    #
    protocol direct {
      interface "lo";
    }
    #
    protocol kernel {
      export all;
    }
    #
    protocol bgp bgp_lcm {
      local as {{.LocalASN}};
      neighbor {{.NeighborIP}} as {{.NeighborASN}};
      import all;
      export filter {
        if dest = RTD_UNREACHABLE then {
          reject;
        }
        accept;
      };
    }
  peeringMap:
    lcm-rack1:
      peers:
      - localASN: 65050
        neighborASN: 65011
        neighborIP: 10.77.31.1
Rack status

The status field of the Rack resource reflects the actual state of the Rack object and contains the following fields:

  • state Since 2.23.0

    Message that reflects the current status of the resource. The list of possible values includes the following:

    • OK - object is operational.

    • ERR - object is non-operational. This status has a detailed description in the messages list.

    • TERM - object was deleted and is terminating.

  • messages Since 2.23.0

    List of error or warning messages if the object state is ERR.

  • objCreated

    Date, time, and IPAM version of the resource creation.

  • objStatusUpdated

    Date, time, and IPAM version of the last update of the status field in the resource.

  • objUpdated

    Date, time, and IPAM version of the last resource update.

Configuration example:

status:
  checksums:
    annotations: sha256:cd4b751d9773eacbfd5493712db0cbebd6df0762156aefa502d65a9d5e8af31d
    labels: sha256:fc2612d12253443955e1bf929f437245d304b483974ff02a165bc5c78363f739
    spec: sha256:8f0223b1eefb6a9cd583905a25822fd83ac544e62e1dfef26ee798834ef4c0c1
  objCreated: 2023-08-11T12:25:21.00000Z  by  v6.5.999-20230810-155553-2497818
  objStatusUpdated: 2023-08-11T12:33:00.92163Z  by  v6.5.999-20230810-155553-2497818
  objUpdated: 2023-08-11T12:32:59.11951Z  by  v6.5.999-20230810-155553-2497818
  state: OK
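To quickly check the actual state of a particular Rack object, you can, for example, query it directly. This is a minimal sketch that assumes the rack resource name is registered in the management cluster API; the project and object names are illustrative:

kubectl -n managed-ns get rack rack-1 -o yaml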
Subnet

This section describes the Subnet resource used in Mirantis Container Cloud API to allocate IP addresses for the cluster nodes.

For demonstration purposes, the Container Cloud Subnet custom resource (CR) can be split into the following major sections:

  • Subnet metadata

  • Subnet spec

  • Subnet status

Subnet metadata

The Container Cloud Subnet CR contains the following fields:

  • apiVersion

    API version of the object that is ipam.mirantis.com/v1alpha1.

  • kind

    Object type that is Subnet.

  • metadata

    This field contains the following subfields:

    • name

      Name of the Subnet object.

    • namespace

      Project in which the Subnet object was created.

    • labels

      Key-value pairs that are attached to the object:

      • ipam/DefaultSubnet: "1" Deprecated since 2.14.0

        Indicates that this subnet was automatically created for the PXE network.

      • ipam/UID

        Unique ID of a subnet.

      • kaas.mirantis.com/provider

        Provider type.

      • kaas.mirantis.com/region

        Region name.

        Note

        The kaas.mirantis.com/region label is removed from all Container Cloud objects in 2.26.0 (Cluster releases 17.1.0 and 16.1.0). Therefore, do not add this label starting with these releases. On existing clusters updated to these releases, or if the label was added manually, Container Cloud ignores it.

      Warning

      Labels and annotations that are not documented in this API Reference are generated automatically by Container Cloud. Do not modify them using the Container Cloud API.

Configuration example:

apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: kaas-mgmt
  namespace: default
  labels:
    ipam/UID: 1bae269c-c507-4404-b534-2c135edaebf5
    kaas.mirantis.com/provider: baremetal
Subnet spec

The spec field of the Subnet resource describes the desired state of a subnet. It contains the following fields:

  • cidr

    A valid IPv4 CIDR, for example, 10.11.0.0/24.

  • gateway

    A valid gateway address, for example, 10.11.0.9.

  • includeRanges

    A comma-separated list of IP address ranges within the given CIDR that should be used in the allocation of IPs for nodes. The gateway, network, broadcast, and DNS addresses will be excluded (protected) automatically if they intersect with one of the ranges. The IPs outside the given ranges will not be used in the allocation. Each element of the list can be either an interval 10.11.0.5-10.11.0.70 or a single address 10.11.0.77.

    Warning

    Do not use values that are out of the given CIDR.

  • excludeRanges

    A comma-separated list of IP address ranges within the given CIDR that should not be used in the allocation of IPs for nodes. The IPs within the given CIDR but outside the given ranges will be used in the allocation. The gateway, network, broadcast, and DNS addresses will be excluded (protected) automatically if they are included in the CIDR. Each element of the list can be either an interval 10.11.0.5-10.11.0.70 or a single address 10.11.0.77.

    Warning

    Do not use values that are out of the given CIDR.

  • useWholeCidr

    If set to false (default), the subnet address and the broadcast address are excluded from the address allocation. If set to true, the subnet address and the broadcast address are included in the address allocation for nodes.

  • nameservers

    The list of IP addresses of name servers. Each element of the list is a single address, for example, 172.18.176.6.

Configuration example:

spec:
  cidr: 172.16.48.0/24
  excludeRanges:
  - 172.16.48.99
  - 172.16.48.101-172.16.48.145
  gateway: 172.16.48.1
  nameservers:
  - 172.18.176.6
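For comparison, the following sketch restricts allocation to an explicit range using includeRanges instead of excludeRanges. The values are illustrative:

spec:
  cidr: 172.16.48.0/24
  includeRanges:
  - 172.16.48.200-172.16.48.253
  gateway: 172.16.48.1
  nameservers:
  - 172.18.176.6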
Subnet status

The status field of the Subnet resource describes the actual state of a subnet. It contains the following fields:

  • allocatable

    The number of IP addresses that are available for allocation.

  • allocatedIPs

    The list of allocated IP addresses in the IP:<IPAddr object UID> format.

  • capacity

    The total number of IP addresses in the allocation scope, that is, the sum of allocatable and already allocated IP addresses.

  • cidr

    The IPv4 CIDR for a subnet.

  • gateway

    The gateway address for a subnet.

  • nameservers

    The list of IP addresses of name servers.

  • ranges

    The list of IP address ranges within the given CIDR that are used in the allocation of IPs for nodes.

  • statusMessage

    Deprecated since Container Cloud 2.23.0 and will be removed in one of the following releases in favor of state and messages. Since Container Cloud 2.24.0, this field is not set for the subnets of newly created clusters. For the field description, see state.

  • state Since 2.23.0

    Message that reflects the current status of the resource. The list of possible values includes the following:

    • OK - object is operational.

    • ERR - object is non-operational. This status has a detailed description in the messages list.

    • TERM - object was deleted and is terminating.

  • messages Since 2.23.0

    List of error or warning messages if the object state is ERR.

  • objCreated

    Date, time, and IPAM version of the resource creation.

  • objStatusUpdated

    Date, time, and IPAM version of the last update of the status field in the resource.

  • objUpdated

    Date, time, and IPAM version of the last resource update.

Configuration example:

status:
  allocatable: 51
  allocatedIPs:
  - 172.16.48.200:24e94698-f726-11ea-a717-0242c0a85b02
  - 172.16.48.201:2bb62373-f726-11ea-a717-0242c0a85b02
  - 172.16.48.202:37806659-f726-11ea-a717-0242c0a85b02
  capacity: 54
  cidr: 172.16.48.0/24
  gateway: 172.16.48.1
  nameservers:
  - 172.18.176.6
  ranges:
  - 172.16.48.200-172.16.48.253
  objCreated: 2021-10-21T19:09:32Z  by  v5.1.0-20210930-121522-f5b2af8
  objStatusUpdated: 2021-10-21T19:14:18.748114886Z  by  v5.1.0-20210930-121522-f5b2af8
  objUpdated: 2021-10-21T19:09:32.606968024Z  by  v5.1.0-20210930-121522-f5b2af8
  state: OK
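To verify the allocation state of a particular Subnet object, you can, for example, query it directly. This is a minimal sketch; the project and object names are illustrative:

kubectl -n default get subnet kaas-mgmt -o yaml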
SubnetPool

Unsupported since 2.28.0 (17.3.0 and 16.3.0)

Warning

The SubnetPool object is unsupported since Container Cloud 2.28.0 (17.3.0 and 16.3.0). For details, see MOSK Deprecation Notes: SubnetPool resource management.

This section describes the SubnetPool resource used in Mirantis Container Cloud API to manage a pool of addresses from which subnets can be allocated.

For demonstration purposes, the Container Cloud SubnetPool custom resource (CR) is split into the following major sections:

  • SubnetPool metadata

  • SubnetPool spec

  • SubnetPool status

SubnetPool metadata

The Container Cloud SubnetPool CR contains the following fields:

  • apiVersion

    API version of the object that is ipam.mirantis.com/v1alpha1.

  • kind

    Object type that is SubnetPool.

  • metadata

    The metadata field contains the following subfields:

    • name

      Name of the SubnetPool object.

    • namespace

      Project in which the SubnetPool object was created.

    • labels

      Key-value pairs that are attached to the object:

      • kaas.mirantis.com/provider

        Provider type that is baremetal.

      • kaas.mirantis.com/region

        Region name.

        Note

        The kaas.mirantis.com/region label is removed from all Container Cloud objects in 2.26.0 (Cluster releases 17.1.0 and 16.1.0). Therefore, do not add this label starting with these releases. On existing clusters updated to these releases, or if the label was added manually, Container Cloud ignores it.

      Warning

      Labels and annotations that are not documented in this API Reference are generated automatically by Container Cloud. Do not modify them using the Container Cloud API.

Configuration example:

apiVersion: ipam.mirantis.com/v1alpha1
kind: SubnetPool
metadata:
  name: kaas-mgmt
  namespace: default
  labels:
    kaas.mirantis.com/provider: baremetal
SubnetPool spec

The spec field of the SubnetPool resource describes the desired state of a subnet pool. It contains the following fields:

  • cidr

    Valid IPv4 CIDR. For example, 10.10.0.0/16.

  • blockSize

    IP address block size to use when assigning an IP address block to every new child Subnet object. For example, if you set /25, every new child Subnet will have 128 IPs to allocate. Possible values are from /29 to the cidr size. Immutable.

  • nameservers

    Optional. List of IP addresses of name servers to use for every new child Subnet object. Each element of the list is a single address, for example, 172.18.176.6. Default: empty.

  • gatewayPolicy

    Optional. Method of assigning a gateway address to new child Subnet objects. Default: none. Possible values are:

    • first - first IP of the IP address block assigned to a child Subnet, for example, 10.11.10.1.

    • last - last IP of the IP address block assigned to a child Subnet, for example, 10.11.10.254.

    • none - no gateway address.

Configuration example:

spec:
  cidr: 10.10.0.0/16
  blockSize: /25
  nameservers:
  - 172.18.176.6
  gatewayPolicy: first
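As a worked example for the values above: with blockSize set to /25, each child Subnet receives 2^(32-25) = 128 IP addresses, and the 10.10.0.0/16 pool can provide up to 2^(25-16) = 512 such blocks.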
SubnetPool status

The status field of the SubnetPool resource describes the actual state of a subnet pool. It contains the following fields:

  • allocatedSubnets

    List of allocated subnets. Each subnet has the <CIDR>:<SUBNET_UID> format.

  • blockSize

    Block size to use for IP address assignments from the defined pool.

  • capacity

    Total number of IP addresses that the pool can provide, including both allocatable and already allocated addresses.

  • allocatable

    Number of subnets with the blockSize size that are available for allocation.

  • state Since 2.23.0

    Message that reflects the current status of the resource. The list of possible values includes the following:

    • OK - object is operational.

    • ERR - object is non-operational. This status has a detailed description in the messages list.

    • TERM - object was deleted and is terminating.

  • messages Since 2.23.0

    List of error or warning messages if the object state is ERR.

  • objCreated

    Date, time, and IPAM version of the resource creation.

  • objStatusUpdated

    Date, time, and IPAM version of the last update of the status field in the resource.

  • objUpdated

    Date, time, and IPAM version of the last resource update.

Example:

status:
  allocatedSubnets:
  - 10.10.0.0/24:0272bfa9-19de-11eb-b591-0242ac110002
  blockSize: /24
  capacity: 54
  allocatable: 51
  objCreated: 2021-10-21T19:09:32Z  by  v5.1.0-20210930-121522-f5b2af8
  objStatusUpdated: 2021-10-21T19:14:18.748114886Z  by  v5.1.0-20210930-121522-f5b2af8
  objUpdated: 2021-10-21T19:09:32.606968024Z  by  v5.1.0-20210930-121522-f5b2af8
  state: OK

Release Compatibility Matrix

The Mirantis Container Cloud Release Compatibility Matrix outlines the specific operating environments that are validated and supported.

The document provides the deployment compatibility for each product release and determines the upgrade paths between major components versions when upgrading. The document also provides the Container Cloud browser compatibility.

A Container Cloud management cluster upgrades automatically when a new product release becomes available. Once the management cluster has been updated, the user can trigger the upgrade of managed clusters through the Container Cloud web UI or API.

To view the full components list with their respective versions for each Container Cloud release, refer to the Container Cloud Release Notes related to the release version of your deployment or use the Releases section in the web UI or API.

Caution

The document applies to the Container Cloud regular deployments. For supported configurations of existing Mirantis Kubernetes Engine (MKE) clusters that are not deployed by Container Cloud, refer to MKE Compatibility Matrix.

Compatibility matrix of component versions

The following tables outline the compatibility matrices of the most recent major Container Cloud and Cluster releases along with patch releases and their component versions. For details about unsupported releases, see Releases summary.

Major and patch versions update path

The primary distinction between major and patch product versions lies in the fact that major release versions introduce new functionalities, whereas patch release versions predominantly offer minor product enhancements, mostly CVE resolutions for your clusters.

Depending on your deployment needs, you can either update only between major Cluster releases or apply patch updates between major releases. Choosing the latter option ensures that you receive security fixes as soon as they become available, though you should be prepared to update your cluster frequently, approximately once every three weeks. Otherwise, you can update only between major Cluster releases, as each subsequent major Cluster release includes the patch Cluster release updates of the previous major Cluster release.

Legend

Symbol

Definition

Cluster release is not included in the Container Cloud release yet.

Latest supported Cluster release to use for cluster deployment or update.

Deprecated Cluster release that you must update to the latest supported Cluster release. The deprecated Cluster release will become unsupported in one of the following Container Cloud releases. Greenfield deployments based on a deprecated Cluster release are not supported. Use the latest supported Cluster release instead.

Unsupported Cluster release that blocks automatic upgrade of a management cluster. Update the Cluster release to the latest supported one to unblock management cluster upgrade and obtain newest product features and enhancements.

Component is included in the Container Cloud release.

Component is available in the Technology Preview scope. Use it only for testing purposes on staging environments.

Component is unsupported in the Container Cloud release.

The following table outlines the compatibility matrix for the Container Cloud release series 2.29.x.

Container Cloud compatibility matrix 2.29.x

Release

Container Cloud

2.29.2 (current)

2.29.1

2.29.0

Release history

Release date

Apr 22, 2025

Mar 26, 2025

Mar 11, 2025

Major Cluster releases (managed)

17.4.0 +
MOSK 25.1
MKE 3.7.19

17.3.0 +
MOSK 24.3
MKE 3.7.12

17.2.0 +
MOSK 24.2
MKE 3.7.8

16.4.0
MKE 3.7.19

16.3.0
MKE 3.7.12

16.2.0
MKE 3.7.8

Patch Cluster releases (managed)

17.3.x + MOSK 24.3.x

17.3.7
17.3.6
17.3.5
17.3.4

17.3.6
17.3.5
17.3.4


17.3.5
17.3.4

17.2.x + MOSK 24.2.x

16.3.x

16.3.7
16.3.6
16.3.5
16.3.4
16.3.3

16.3.6
16.3.5
16.3.4
16.3.3


16.3.5
16.3.4
16.3.3

16.2.x

Fully managed cluster

Mirantis Kubernetes Engine (MKE)

3.7.20
17.3.7, 16.4.2, 16.3.7
3.7.20
17.3.6, 16.4.1, 16.3.6
3.7.19
17.4.0, 16.4.0

Container orchestration

Kubernetes

1.27 17.x, 16.x

1.27 17.x, 16.x

1.27 17.x, 16.x

Container runtime

Mirantis Container Runtime (MCR)

25.0.8 11
17.4.0, 16.4.0, 16.4.2
23.0.15 11
17.3.7, 16.3.7
25.0.8 11
17.4.0, 16.4.0, 16.4.1
23.0.15 11
17.3.6, 16.3.6
25.0.8
17.4.0, 16.4.0
23.0.15
17.3.5, 16.3.5

OS distributions

Ubuntu

22.04

22.04

22.04

Infrastructure platform

Bare metal 8

kernel 5.15.0-135-generic Jammy
kernel 5.15.0-134-generic Jammy
kernel 5.15.0-131-generic Jammy

MOSK Yoga or Antelope with OVS 3

OpenStack (Octavia)
Queens
Yoga
Antelope
Queens
Yoga
Antelope
Queens
Yoga
Antelope

Software defined storage

Ceph

18.2.4-14.cve
16.4.2
18.2.4-13.cve
17.3.7, 16.3.7
18.2.4-13.cve
17.3.6, 16.4.1, 16.3.6
18.2.4-12.cve
17.4.0, 16.4.0
18.2.4-11.cve
17.3.5, 16.3.5

Rook

1.14.10-29
16.4.2
1.13.5-29
17.3.7, 16.3.7
1.14.10-28
16.4.1
1.13.5-29
17.3.6, 16.3.6
1.14.10-26
17.4.0, 16.4.0
1.13.5-28
17.3.5, 16.3.5

Logging, monitoring, and alerting

StackLight


The following drop-down tables outline compatibility matrices for several previous releases of Container Cloud.

Container Cloud compatibility matrix 2.28.x

Release

Container Cloud

2.28.5

2.28.4

2.28.3

2.28.2

2.28.1

2.28.0

Release history

Release date

Feb 03, 2025

Jan 06, 2025

Dec 09, 2024

Nov 18, 2024

Oct 30, 2024

Oct 16, 2024

Major Cluster releases (managed)

17.3.0 +
MOSK 24.3
MKE 3.7.12

17.2.0 +
MOSK 24.2
MKE 3.7.8

17.1.0 +
MOSK 24.1
MKE 3.7.5

16.3.0
MKE 3.7.12

16.2.0
MKE 3.7.8

16.1.0
MKE 3.7.5

Patch Cluster releases (managed)

17.3.x + MOSK 24.3.x

17.3.5
17.3.4

17.3.4

17.2.x + MOSK 24.2.x

17.2.7
17.2.6
17.2.5
17.2.4
17.2.3
17.2.7
17.2.6
17.2.5
17.2.4
17.2.3

17.2.6
17.2.5
17.2.4
17.2.3


17.2.5
17.2.4
17.2.3



17.2.4
17.2.3

17.1.x + MOSK 24.1.x

16.3.x

16.3.5
16.3.4
16.3.3
16.3.2
16.3.1

16.3.4
16.3.3
16.3.2
16.3.1


16.3.3
16.3.2
16.3.1



16.3.2
16.3.1




16.3.1

16.2.x

16.2.7
16.2.6
16.2.5
16.2.4
16.2.3


16.2.7
16.2.6
16.2.5
16.2.4
16.2.3



16.2.6
16.2.5
16.2.4
16.2.3




16.2.5
16.2.4
16.2.3





16.2.4
16.2.3
16.2.2
16.2.1

16.1.x

Fully managed cluster

Mirantis Kubernetes Engine (MKE)

3.7.18
17.3.5, 16.3.5
3.7.17
17.3.4, 16.3.4
3.7.16
17.2.7, 16.3.3, 16.2.7
3.7.16
17.2.6, 16.3.2, 16.2.6
3.7.15
17.2.5, 16.3.1, 16.2.5
3.7.12
17.3.0, 16.3.0

Container orchestration

Kubernetes

1.27 17.x, 16.x

1.27 17.x, 16.x

1.27 17.x, 16.x

1.27 17.x, 16.x

1.27 17.x, 16.x

1.27 17.x, 16.x

Container runtime

Mirantis Container Runtime (MCR)

23.0.15
17.3.5, 16.3.5
23.0.14
17.3.0, 16.3.0
23.0.15
17.3.4, 16.3.4
23.0.14
17.3.0, 16.3.0
23.0.14
17.3.0, 16.3.x
23.0.11
17.2.7, 16.2.7
23.0.14
17.3.0, 16.3.x
23.0.11
17.2.6, 16.2.6
23.0.14
17.3.0, 16.3.x
23.0.11
17.2.5, 16.2.5
23.0.14
17.3.0, 16.3.0
23.0.11
17.2.4, 16.2.4

OS distributions

Ubuntu

22.04 9

22.04 9

22.04 9

22.04 9

22.04 9

22.04 9

Infrastructure platform

Bare metal 8

kernel 5.15.0-130-generic Jammy, Focal
kernel 5.15.0-126-generic Jammy, Focal
kernel 5.15.0-125-generic Jammy, Focal
kernel 5.15.0-124-generic Jammy, Focal
kernel 5.15.0-122-generic Jammy, Focal
kernel 5.15.0-119-generic Jammy, Focal

MOSK Yoga or Antelope with OVS 3

OpenStack (Octavia)
Queens
Yoga
Antelope
Queens
Yoga
Antelope
Queens
Yoga
Antelope
Queens
Yoga
Antelope
Queens
Yoga
Antelope
Queens
Yoga
Antelope

Software defined storage

Ceph

18.2.4-11.cve
17.3.5, 16.3.5
18.2.4-11.cve
17.3.4, 16.3.4
18.2.4-10.cve
17.2.7, 16.3.3, 16.2.7
18.2.4-8.cve
17.2.6, 16.3.2, 16.2.6
18.2.4-6.cve
16.3.1
18.2.4-7.cve
17.2.5, 16.2.5
18.2.4-6.cve
17.3.0, 16.3.0

Rook

1.13.5-28
17.3.5, 16.3.5
1.13.5-28
17.3.4, 16.3.4
1.13.5-26
17.2.7, 16.3.3, 16.2.7
1.13.5-23
17.2.6, 16.3.2, 16.2.6
1.13.5-21
16.3.1
1.13.5-22
17.2.5, 16.2.5
1.13.5-21
17.3.0, 16.3.0

Logging, monitoring, and alerting

StackLight

Container Cloud compatibility matrix 2.27.x

Release

Container Cloud

2.27.4

2.27.3

2.27.2

2.27.1

2.27.0

Release history

Release date

Sep 16, 2024

Aug 27, 2024

Aug 05, 2024

July 16, 2024

July 02, 2024

Major Cluster releases (managed)

17.2.0 +
MOSK 24.2
MKE 3.7.8

17.1.0 +
MOSK 24.1
MKE 3.7.5

16.2.0
MKE 3.7.8

16.1.0
MKE 3.7.5

Patch Cluster releases (managed)

17.2.x + MOSK 24.2.x

17.2.4
17.2.3

17.2.3

17.1.x + MOSK 24.1.x

17.1.7+24.1.7
17.1.6+24.1.6
17.1.5+24.1.5
17.1.7+24.1.7
17.1.6+24.1.6
17.1.5+24.1.5
17.1.7+24.1.7
17.1.6+24.1.6
17.1.5+24.1.5

17.1.6+24.1.6
17.1.5+24.1.5


17.1.5+24.1.5

16.2.x

16.2.4
16.2.3
16.2.2
16.2.1

16.2.3
16.2.2
16.2.1


16.2.2
16.2.1



16.2.1

16.1.x

16.1.7
16.1.6
16.1.5
16.1.7
16.1.6
16.1.5
16.1.7
16.1.6
16.1.5

16.1.6
16.1.5


16.1.5

Fully managed cluster

Mirantis Kubernetes Engine (MKE)

3.7.12
17.2.4, 16.2.4
3.7.12
17.2.3, 16.2.3
3.7.11
17.1.7, 16.2.2, 16.1.7
3.7.10
17.1.6, 16.2.1, 16.1.6
3.7.8
17.2.0, 16.2.0

Attached managed cluster

MKE 7

3.6.8
19.1.0
3.6.1
19.0.0
3.5.5
18.1.0
3.5.3
18.0.0
3.6.8
19.1.0
3.6.1
19.0.0
3.5.5
18.1.0
3.5.3
18.0.0
3.6.8
19.1.0
3.6.1
19.0.0
3.5.5
18.1.0
3.5.3
18.0.0

Container orchestration

Kubernetes

1.27 17.x, 16.x

1.27 17.x, 16.x

1.27 17.x, 16.x

1.27 17.x, 16.x

1.27 17.x, 16.x

Container runtime

Mirantis Container Runtime (MCR)

23.0.11 17.2.x, 16.2.x 10
23.0.9 17.1.x, 16.1.x 10
23.0.11 17.2.x, 16.2.x 10
23.0.9 17.1.x, 16.1.x 10
23.0.11 17.2.x, 16.2.x 10
23.0.9 17.1.x, 16.1.x 10
23.0.11 17.2.x, 16.2.x 10
23.0.9 17.1.x, 16.1.x 10

23.0.11 17.2.x, 16.2.x

OS distributions

Ubuntu

22.04 9
20.04
22.04 9
20.04
22.04 9
20.04
22.04 9
20.04
22.04 9
20.04

Infrastructure platform

Bare metal 8

kernel 5.15.0-119-generic Jammy
kernel 5.15.0-118-generic Focal
kernel 5.15.0-117-generic Jammy, Focal
kernel 5.15.0-116-generic Jammy
kernel 5.15.0-113-generic Focal
kernel 5.15.0-113-generic
kernel 5.15.0-107-generic

MOSK Yoga or Antelope with OVS 3

OpenStack (Octavia)
Queens
Yoga
Antelope
Queens
Yoga
Antelope
Queens
Yoga
Antelope
Queens
Yoga
Antelope
Queens
Yoga
Antelope

VMware vSphere 5

7.0, 6.7

7.0, 6.7

7.0, 6.7

Software defined storage

Ceph

18.2.4-4.cve
17.2.4, 16.2.4
18.2.4-3.cve
17.2.3, 16.2.3
18.2.3-2.cve
16.2.2
17.2.7-15.cve
17.1.7, 16.1.7
18.2.3-2.cve
16.2.1
17.2.7-15.cve
17.1.6, 16.1.6
18.2.3-1.release
17.2.0, 16.2.0

Rook

1.13.5-19
17.2.4, 16.2.4
1.13.5-18
17.2.3, 16.2.3
1.13.5-16
16.2.2
1.12.10-21
17.1.7, 16.1.7
1.13.5-16
16.2.1
1.12.10-21
17.1.6, 16.1.6
1.13.5-15
17.2.0, 16.2.0

Logging, monitoring, and alerting

StackLight

Container Cloud compatibility matrix 2.26.x

Release

Container Cloud

2.26.5

2.26.4

2.26.3

2.26.2

2.26.1

2.26.0

Release history

Release date

June 18, 2024

May 20, 2024

Apr 29, 2024

Apr 08, 2024

Mar 20, 2024

Mar 04, 2024

Major Cluster releases (managed)

17.1.0 +
MOSK 24.1
MKE 3.7.5

17.0.0 +
MOSK 23.3
MKE 3.7.1

16.1.0
MKE 3.7.5

16.0.0
MKE 3.7.1

Patch Cluster releases (managed)

17.1.x + MOSK 24.1.x

17.1.5+24.1.5
17.1.4+24.1.4
17.1.3+24.1.3
17.1.2+24.1.2
17.1.1+24.1.1

17.1.4+24.1.4
17.1.3+24.1.3
17.1.2+24.1.2
17.1.1+24.1.1


17.1.3+24.1.3
17.1.2+24.1.2
17.1.1+24.1.1



17.1.2+24.1.2
17.1.1+24.1.1




17.1.1+24.1.1




17.0.x + MOSK 23.3.x

17.0.4+23.3.4
17.0.4+23.3.4
17.0.4+23.3.4
17.0.4+23.3.4
17.0.4+23.3.4
17.0.4+23.3.4
17.0.3+23.3.3
17.0.2+23.3.2
17.0.1+23.3.1

16.1.x

16.1.5
16.1.4
16.1.3
16.1.2
16.1.1

16.1.4
16.1.3
16.1.2
16.1.1


16.1.3
16.1.2
16.1.1



16.1.2
16.1.1




16.1.1




16.0.x

16.0.4
16.0.4
16.0.4
16.0.4
16.0.4
16.0.4
16.0.3
16.0.2
16.0.1

Fully managed cluster

Mirantis Kubernetes Engine (MKE)

3.7.8
17.1.5, 16.1.5
3.7.8
17.1.4, 16.1.4
3.7.7
17.1.3, 16.1.3
3.7.6
17.1.2, 16.1.2
3.7.5
17.1.1, 16.1.1
3.7.5
17.1.0, 16.1.0

Attached managed cluster

MKE 7

3.6.8
19.1.0
3.6.1
19.0.0
3.5.5
18.1.0
3.5.3
18.0.0
3.6.8
19.1.0
3.6.1
19.0.0
3.5.5
18.1.0
3.5.3
18.0.0
3.6.8
19.1.0
3.6.1
19.0.0
3.5.5
18.1.0
3.5.3
18.0.0
3.6.8
19.1.0
3.6.1
19.0.0
3.5.5
18.1.0
3.5.3
18.0.0
3.6.8
19.1.0
3.6.1
19.0.0
3.5.5
18.1.0
3.5.3
18.0.0
3.6.8
19.1.0
3.6.1
19.0.0
3.5.5
18.1.0
3.5.3
18.0.0

Container orchestration

Kubernetes

1.27 17.1.x, 16.1.x

1.27 17.1.x, 16.1.x

1.27 17.1.x, 16.1.x

1.27 17.1.x, 16.1.x

1.27 17.1.x, 16.1.x

1.27 17.1.x, 16.1.x

Container runtime

Mirantis Container Runtime (MCR)

23.0.9 17.1.x, 16.1.x 2

23.0.9 17.1.x, 16.1.x 2

23.0.9 17.1.x, 16.1.x 2

23.0.9 17.1.x, 16.1.x 2

23.0.9 17.1.x, 16.1.x

23.0.9 17.1.x, 16.1.x

OS distributions

Ubuntu

20.04

20.04

20.04

20.04

20.04

20.04

Infrastructure platform

Bare metal 8

kernel 5.15.0-107-generic
kernel 5.15.0-105-generic
kernel 5.15.0-102-generic
kernel 5.15.0-101-generic
kernel 5.15.0-97-generic
kernel 5.15.0-92-generic

MOSK Yoga or Antelope with OVS 3

OpenStack (Octavia)
Queens
Yoga
Antelope
Queens
Yoga
Antelope
Queens
Yoga
Antelope
Queens
Yoga
Antelope
Queens
Yoga
Antelope
Queens
Yoga
Antelope

VMware vSphere 5

7.0, 6.7

7.0, 6.7

7.0, 6.7

7.0, 6.7

7.0, 6.7

7.0, 6.7

Software defined storage

Ceph

17.2.7-13.cve
17.1.5, 16.1.5
17.2.7-12.cve
17.1.4, 16.1.4
17.2.7-11.cve
17.1.3, 16.1.3
17.2.7-10.release
17.1.2, 16.1.2
17.2.7-9.release
17.1.1, 16.1.1
17.2.7-8.release
17.1.0, 16.1.0

Rook

1.12.10-19
17.1.5, 16.1.5
1.12.10-18
17.1.4, 16.1.4
1.12.10-17
17.1.3, 16.1.3
1.12.10-16
17.1.2, 16.1.2
1.12.10-14
17.1.1, 16.1.1
1.12.10-13
17.1.0, 16.1.0

Logging, monitoring, and alerting

StackLight

Container Cloud compatibility matrix 2.25.x

Release

Container Cloud

2.25.4

2.25.3

2.25.2

2.25.1

2.25.0

Release history

Release date

Jan 10, 2024

Dec 18, 2023

Dec 05, 2023

Nov 27, 2023

Nov 06, 2023

Major Cluster releases (managed)

17.0.0 +
MOSK 23.3
MKE 3.7.1

16.0.0
MKE 3.7.1

15.0.1 +
MOSK 23.2
MKE 3.6.5

14.1.0 1
MKE 3.6.6

14.0.1
MKE 3.6.5

12.7.0 +
MOSK 23.1
MKE 3.5.7

11.7.0
MKE 3.5.7

Patch Cluster releases (managed)

17.0.x + MOSK 23.3.x

17.0.4+23.3.4
17.0.3+23.3.3
17.0.2+23.3.2
17.0.1+23.3.1

17.0.3+23.3.3
17.0.2+23.3.2
17.0.1+23.3.1


17.0.2+23.3.2
17.0.1+23.3.1



17.0.1+23.3.1

16.0.x

16.0.4
16.0.3
16.0.2
16.0.1

16.0.3
16.0.2
16.0.1


16.0.2
16.0.1



16.0.1

15.0.x + MOSK 23.2.x

15.0.4+23.2.3

15.0.4+23.2.3

15.0.4+23.2.3

15.0.4+23.2.3

15.0.4+23.2.3

14.0.x

14.0.4

14.0.4

14.0.4

14.0.4

14.0.4

Fully managed cluster

Mirantis Kubernetes Engine (MKE)

3.7.3
Since 17.0.3, 16.0.3
3.7.2
Since 17.0.1, 16.0.1
3.7.1
17.0.0, 16.0.0
3.7.3
Since 17.0.3, 16.0.3
3.7.2
Since 17.0.1, 16.0.1
3.7.1
17.0.0, 16.0.0
3.7.2
Since 17.0.1, 16.0.1
3.7.1
17.0.0, 16.0.0
3.7.2
Since 17.0.1, 16.0.1
3.7.1
17.0.0, 16.0.0
3.7.1
17.0.0, 16.0.0

Attached managed cluster

MKE 7

3.6.8
19.1.0
3.6.1
19.0.0
3.5.5
18.1.0
3.5.3
18.0.0
3.6.8
19.1.0
3.6.1
19.0.0
3.5.5
18.1.0
3.5.3
18.0.0
3.6.8
19.1.0
3.6.1
19.0.0
3.5.5
18.1.0
3.5.3
18.0.0

Container orchestration

Kubernetes

1.27 17.0.x, 16.0.x

1.27 17.0.x, 16.0.x

1.27 17.0.x, 16.0.x

1.27 17.0.x, 16.0.x

1.27 17.0.0, 16.0.0

Container runtime

Mirantis Container Runtime (MCR)

23.0.7 17.0.x, 16.0.x

23.0.7 17.0.x, 16.0.x

23.0.7 17.0.x, 16.0.x

23.0.7 17.0.x, 16.0.x

23.0.7 17.0.0, 16.0.0

OS distributions

Ubuntu

20.04

20.04

20.04

20.04

20.04

Infrastructure platform

Bare metal 8

kernel 5.15.0-86-generic

kernel 5.15.0-86-generic

kernel 5.15.0-86-generic

kernel 5.15.0-86-generic

kernel 5.15.0-86-generic

MOSK Yoga or Antelope with Tungsten Fabric 3

MOSK Yoga or Antelope with OVS 3

OpenStack (Octavia)
Queens
Yoga
Antelope
Queens
Yoga
Antelope
Queens
Yoga
Antelope
Queens
Yoga
Antelope
Queens
Yoga
Antelope

VMware vSphere 5

7.0, 6.7

7.0, 6.7

7.0, 6.7

7.0, 6.7

7.0, 6.7

Software defined storage

Ceph

17.2.6-8.cve
Since 17.0.3, 16.0.3
17.2.6-5.cve
17.0.2, 16.0.2
17.2.6-2.cve
17.0.1, 16.0.1
17.2.6-cve-1
17.0.0, 16.0.0, 14.1.0
17.2.6-8.cve
17.0.3, 16.0.3
17.2.6-5.cve
17.0.2, 16.0.2
17.2.6-2.cve
17.0.1, 16.0.1
17.2.6-cve-1
17.0.0, 16.0.0, 14.1.0
17.2.6-5.cve
17.0.2, 16.0.2
17.2.6-2.cve
17.0.1, 16.0.1
17.2.6-cve-1
17.0.0, 16.0.0, 14.1.0
17.2.6-2.cve
17.0.1, 16.0.1
17.2.6-cve-1
17.0.0, 16.0.0, 14.1.0
17.2.6-cve-1
17.0.0, 16.0.0, 14.1.0

Rook

1.11.11-22
17.0.4, 16.0.4
1.11.11-21
17.0.3, 16.0.3
1.11.11-17
17.0.2, 16.0.2
1.11.11-15
17.0.1, 16.0.1
1.11.11-13
17.0.0, 16.0.0, 14.1.0
1.11.11-21
17.0.3, 16.0.3
1.11.11-17
17.0.2, 16.0.2
1.11.11-15
17.0.1, 16.0.1
1.11.11-13
17.0.0, 16.0.0, 14.1.0
1.11.11-17
17.0.2, 16.0.2
1.11.11-15
17.0.1, 16.0.1
1.11.11-13
17.0.0, 16.0.0, 14.1.0
1.11.11-15
17.0.1, 16.0.1
1.11.11-13
17.0.0, 16.0.0, 14.1.0
1.11.11-13
17.0.0, 16.0.0, 14.1.0

Logging, monitoring, and alerting

StackLight

Container Cloud compatibility matrix 2.24.x

Release

Container Cloud

2.24.5

2.24.4

2.24.3

2.24.2

2.24.0
2.24.1 0

Release history

Release date

Sep 26, 2023

Sep 14, 2023

Aug 29, 2023

Aug 21, 2023

Jul 20, 2023
Jul 27, 2023

Major Cluster releases (managed)

15.0.1 +
MOSK 23.2
MKE 3.6.5

14.0.1
MKE 3.6.5

14.0.0
MKE 3.6.5

12.7.0 +
MOSK 23.1
MKE 3.5.7

11.7.0
MKE 3.5.7

Patch Cluster releases (managed)

15.0.x + MOSK 23.2.x

15.0.4+23.2.3
15.0.3+23.2.2
15.0.2+23.2.1

15.0.3+23.2.2
15.0.2+23.2.1


15.0.2+23.2.1

14.0.x

14.0.4
14.0.3
14.0.2

14.0.3
14.0.2


14.0.2

Managed cluster

Mirantis Kubernetes Engine (MKE)

3.6.6
Since 15.0.2, 14.0.2
3.6.5
15.0.1, 14.0.1
3.6.6
Since 15.0.2, 14.0.2
3.6.5
15.0.1, 14.0.1
3.6.6
15.0.2, 14.0.2
3.6.5
15.0.1, 14.0.1
3.6.5
15.0.1, 14.0.1
3.6.5
14.0.0

Container orchestration

Kubernetes

1.24
15.0.x, 14.0.x
1.24
15.0.x, 14.0.x
1.24
15.0.x, 14.0.x
1.24
15.0.1, 14.0.1
1.24
14.0.0

Container runtime

Mirantis Container Runtime (MCR)

20.10.17
15.0.x, 14.0.x
20.10.17
15.0.x, 14.0.x
20.10.17 2
15.0.x, 14.0.x
20.10.17
15.0.1, 14.0.1
20.10.17
14.0.0

OS distributions

Ubuntu

20.04

20.04

20.04

20.04

20.04

Infrastructure platform

Bare metal

kernel 5.4.0-150-generic

kernel 5.4.0-150-generic

kernel 5.4.0-150-generic

kernel 5.4.0-150-generic

kernel 5.4.0-150-generic

MOSK Yoga or Antelope with Tungsten Fabric 3

MOSK Yoga or Antelope with OVS 3

OpenStack (Octavia)
Queens
Yoga
Queens
Yoga
Queens
Yoga
Queens
Yoga
Queens
Yoga

VMware vSphere 5

7.0, 6.7

7.0, 6.7

7.0, 6.7

7.0, 6.7

7.0, 6.7

Software defined storage

Ceph 6

17.2.6-cve-1 Since 15.0.2, 14.0.2
17.2.6-rel-5 15.0.1, 14.0.1
17.2.6-cve-1
Since 15.0.2, 14.0.2
17.2.6-rel-5
15.0.1, 14.0.1
17.2.6-cve-1
15.0.2, 14.0.2
17.2.6-rel-5
15.0.1, 14.0.1
17.2.6-rel-5
17.2.6-rel-5
16.2.11-cve-4
16.2.11

Rook 6

1.11.4-12
Since 15.0.3, 14.0.3
1.11.4-11
15.0.2, 14.0.2
1.11.4-10
15.0.1, 14.0.1
1.11.4-12
15.0.3, 14.0.3
1.11.4-11
15.0.2, 14.0.2
1.11.4-10
15.0.1, 14.0.1
1.11.4-11
15.0.2, 14.0.2
1.11.4-10
15.0.1, 14.0.1
1.11.4-10
1.11.4-10
1.10.10-10
1.0.0-20230120144247

Logging, monitoring, and alerting

StackLight

Container Cloud compatibility matrix 2.23.x

Release

Container Cloud

2.23.5

2.23.4

2.23.3

2.23.2

2.23.1

2.23.0

Release history

Release date

Jun 05, 2023

May 22, 2023

May 04, 2023

Apr 20, 2023

Apr 04, 2023

Mar 07, 2023

Major Cluster releases (managed)

12.7.0 +
MOSK 23.1 MKE 3.5.7

12.5.0 +
MOSK 22.5 MKE 3.5.5

11.7.0
MKE 3.5.7

11.6.0
MKE 3.5.5

Patch Cluster releases (managed)

12.7.x + MOSK 23.1.x

12.7.4 + 23.1.4
12.7.3 + 23.1.3
12.7.2 + 23.1.2
12.7.1 + 23.1.1

12.7.3 + 23.1.3
12.7.2 + 23.1.2
12.7.1 + 23.1.1


12.7.2 + 23.1.2
12.7.1 + 23.1.1



12.7.1 + 23.1.1

11.7.x

11.7.4
11.7.3
11.7.2
11.7.1

11.7.3
11.7.2
11.7.1


11.7.2
11.7.1



11.7.1

Managed cluster

Mirantis Kubernetes Engine (MKE)

3.5.7 12.7.x, 11.7.x

3.5.7 12.7.x, 11.7.x

3.5.7 12.7.x, 11.7.x

3.5.7 12.7.x, 11.7.x

3.5.7 12.7.0, 11.7.0

3.5.7 11.7.0

Container orchestration

Kubernetes

1.21 12.7.x, 11.7.x

1.21 12.7.x, 11.7.x

1.21 12.7.x, 11.7.x

1.21 12.7.x, 11.7.x

1.21 12.7.0, 11.7.0

1.21 12.5.0, 11.7.0

Container runtime

Mirantis Container Runtime (MCR) 2

20.10.13

20.10.13

20.10.13

20.10.13

20.10.13

20.10.13

OS distributions

Ubuntu

20.04

20.04

20.04

20.04

20.04

20.04

Infrastructure platform

Bare metal

kernel 5.4.0-137-generic

kernel 5.4.0-137-generic

kernel 5.4.0-137-generic

kernel 5.4.0-137-generic

kernel 5.4.0-137-generic

kernel 5.4.0-137-generic

MOSK Victoria or Yoga with Tungsten Fabric 3

MOSK Victoria or Yoga with OVS 3

OpenStack (Octavia)
Queens
Victoria
Yoga
Queens
Victoria
Yoga
Queens
Victoria
Yoga
Queens
Victoria
Yoga
Queens
Victoria
Yoga
Queens
Victoria
Yoga

VMware vSphere 5

7.0, 6.7

7.0, 6.7

7.0, 6.7

7.0, 6.7

7.0, 6.7

7.0, 6.7

Software defined storage

Ceph 6

16.2.11-cve-4
16.2.11-cve-2
16.2.11
16.2.11-cve-4
16.2.11-cve-2
16.2.11
16.2.11-cve-4
16.2.11-cve-2
16.2.11

16.2.11-cve-2
16.2.11


16.2.11


16.2.11

Rook 6

1.10.10-10
1.10.10-9
1.0.0-20230120144247
1.10.10-10
1.10.10-9
1.0.0-20230120144247
1.10.10-10
1.10.10-9
1.0.0-20230120144247

1.10.10-9
1.0.0-20230120144247


1.0.0-20230120144247


1.0.0-20230120144247

Logging, monitoring, and alerting

StackLight

0

Container Cloud 2.23.5 or 2.24.0 automatically upgrades to the 2.24.1 patch release containing several hot fixes.

1

The major Cluster release 14.1.0 is dedicated to the vSphere provider only. This is the last Cluster release for the vSphere provider based on MCR 20.10 and MKE 3.6.6 with Kubernetes 1.24.

Container Cloud 2.25.1 introduces the patch Cluster release 16.0.1 that supports the vSphere provider on MCR 23.0.7 and MKE 3.7.2 with Kubernetes 1.27. For details, see External vSphere CCM with CSI supporting vSphere 6.7 on Kubernetes 1.27.

2(1,2,3,4,5,6)
  • In Container Cloud 2.26.2, docker-ee-cli is updated to 23.0.10 for MCR 23.0.9 to fix several CVEs.

  • In Container Cloud 2.24.3, docker-ee-cli is updated to 20.10.18 for MCR 20.10.17 to fix the following CVEs: CVE-2023-28840, CVE-2023-28642, CVE-2022-41723.

3(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17)
  • OpenStack Antelope is supported as TechPreview since MOSK 23.3.

  • A Container Cloud cluster based on MOSK Yoga or Antelope with Tungsten Fabric is supported as TechPreview since Container Cloud 2.25.1. Since Container Cloud 2.26.0, support for this configuration is suspended. If you still require this configuration, contact Mirantis support for further information.

  • OpenStack Victoria is supported until September, 2023. MOSK 23.2 is the last release version where OpenStack Victoria packages are updated.

    If you have not already upgraded your OpenStack version to Yoga, Mirantis highly recommends doing this during the course of the MOSK 23.2 series. For details, see MOSK documentation: Upgrade OpenStack.

4(1,2,3,4,5,6,7)

Only Cinder API V3 is supported.

5(1,2,3,4,5)
  • Since Container Cloud 2.27.3 (Cluster release 16.2.3), the VMware vSphere configuration is unsupported. For details, see Deprecation notes.

  • VMware vSphere is supported on RHEL 8.7 or Ubuntu 20.04.

  • RHEL 8.7 is generally available since Cluster releases 16.0.0 and 14.1.0. Before these Cluster releases, it is supported within the Technology Preview features scope.

  • For Ubuntu deployments, Packer builds a vSphere virtual machine template that is based on Ubuntu 20.04 with kernel 5.15.0-116-generic. If you build a VM template manually, we recommend installing the same kernel version 5.15.0-116-generic.

6(1,2,3,4)
  • Ceph Pacific supported in 2.23.0 is automatically updated to Quincy during cluster update to 2.24.0.

  • Ceph Pacific 16.2.11 and Rook 1.0.0-20230120144247 apply to major Cluster releases 12.7.0 and 11.7.0 only.

7(1,2,3)

Attachment of non Container Cloud based MKE clusters is supported only for vSphere-based management clusters on Ubuntu 20.04. Since Container Cloud 2.27.3 (Cluster release 16.2.3), the vSphere-based configuration is unsupported. For details, see Deprecation notes.

8(1,2,3,4,5)

The kernel version of the host operating system is validated by Mirantis and confirmed to be working for the supported use cases. If you use custom kernel versions or third-party vendor-provided kernels, such as FIPS-enabled ones, you assume full responsibility for validating the compatibility of components in such environments.

9(1,2,3,4,5,6,7,8,9,10,11)
  • On non-MOSK clusters, Ubuntu 22.04 is installed by default on management and managed clusters. Ubuntu 20.04 is not supported.

  • On MOSK clusters:

    • Since Container Cloud 2.28.0 (Cluster releases 17.3.0), Ubuntu 22.04 is generally available for managed clusters. All existing deployments based on Ubuntu 20.04 must be upgraded to 22.04 within the course of 2.28.x. Otherwise, update of managed clusters to 2.29.0 will become impossible and management cluster update to 2.29.1 will be blocked.

    • Before Container Cloud 2.28.0 (Cluster releases 17.2.0, 16.2.0, or earlier), Ubuntu 22.04 is installed by default on management clusters only. And Ubuntu 20.04 is the only supported distribution for managed clusters.

10(1,2,3,4,5,6,7,8)

In Container Cloud 2.27.1, docker-ee-cli is updated to 23.0.13 for MCR 23.0.11 and 23.0.9 to fix several CVEs.

11(1,2,3,4)

In Container Cloud 2.29.1 (Cluster releases 17.3.6, 16.4.1, and 16.3.6), docker-ee-cli is updated to 23.0.17 on MOSK clusters (MCR 23.0.15) and 25.0.9m1 on management clusters (MCR 25.0.8) to fix several CVEs.

See also

Release Notes

Container Cloud web UI browser compatibility

The Container Cloud web UI runs in the browser, separate from any backend software. As such, Mirantis aims to support browsers separately from the backend software in use, although each Container Cloud release is tested with specific browser versions.

Mirantis currently supports the following web browsers for the Container Cloud web UI:

Browser

Supported version

Release date

Supported operating system

Firefox

94.0 or newer

November 2, 2021

Windows, macOS

Google Chrome

96.0.4664 or newer

November 15, 2021

Windows, macOS

Microsoft Edge

95.0.1020 or newer

October 21, 2021

Windows

Caution

This table does not apply to third-party web UIs such as the StackLight or Keycloak endpoints that are available through the Container Cloud web UI. Refer to the official documentation of the corresponding third-party component for details about its supported browsers versions.

To ensure the best user experience, Mirantis recommends that you use the latest version of any of the supported browsers. The use of other browsers or older versions of the supported browsers can result in rendering issues and can even lead to glitches and crashes if the browser does not support some JavaScript language features or browser web APIs that the Container Cloud web UI relies on.

Important

Mirantis does not tie browser support to any particular Container Cloud release.

Mirantis strives to leverage the latest in browser technology to build more performant client software, as well as ensuring that our customers benefit from the latest browser security updates. To this end, our strategy is to regularly move our supported browser versions forward, while also lagging behind the latest releases by approximately one year to give our customers a sufficient upgrade buffer.

See also

Release Notes

Release Notes

Major and patch versions update path

The primary distinction between major and patch product versions lies in the fact that major release versions introduce new functionalities, whereas patch release versions predominantly offer minor product enhancements, mostly CVE resolutions for your clusters.

Depending on your deployment needs, you can either update only between major Cluster releases or apply patch updates between major releases. Choosing the latter option ensures that you receive security fixes as soon as they become available, though you should be prepared to update your cluster frequently, approximately once every three weeks. Otherwise, you can update only between major Cluster releases, as each subsequent major Cluster release includes the patch Cluster release updates of the previous major Cluster release.

Releases summary
Container Cloud release


Release date



Supported Cluster releases


Summary



2.29.2

Apr 22, 2025

Container Cloud 2.29.2 is the second patch release of the 2.29.x release series that introduces the following updates:

  • Support for the patch Cluster releases 16.4.2, 16.3.7, and 17.3.7, the latter of which represents MOSK patch release 24.3.4.

  • Support for Mirantis Kubernetes Engine 3.7.20

  • Support for docker-ee-cli 23.0.17 on MOSK clusters (MCR 23.0.15) and 25.0.9m1 on management clusters (MCR 25.0.8)

  • Mandatory migration of container runtime from Docker to containerd

  • Bare metal: update of Ubuntu mirror to ubuntu-2025-03-31-003900 along with update of minor kernel version to 5.15.0-135-generic

  • Ubuntu base image: support for utils that extend NVMe provisioning options

  • Security fixes for CVEs in images

2.29.1

Mar 26, 2025

Container Cloud 2.29.1 is the first patch release of the 2.29.x release series that introduces the following updates:

  • Support for the patch Cluster releases 16.4.1, 16.3.6, and 17.3.6, the latter of which represents MOSK patch release 24.3.3.

  • Support for Mirantis Kubernetes Engine 3.7.20

  • Support for docker-ee-cli 23.0.17 on MOSK clusters (MCR 23.0.15) and 25.0.9m1 on management clusters (MCR 25.0.8)

  • Mandatory migration of container runtime from Docker to containerd

  • Bare metal: update of Ubuntu mirror to ubuntu-2025-03-05-003900 along with update of minor kernel version to 5.15.0-134-generic

  • Security fixes for CVEs in images

2.29.0

Mar 11, 2025

  • Improvements in the CIS Benchmark compliance for Ubuntu Linux 22.04 LTS v2.0.0 L1 Server

  • Support for MKE 3.7.19

  • Support for MCR 25.0.8

  • Switch of the default container runtime from Docker to containerd and mandatory migration to containerd

  • BareMetalHostInventory instead of BareMetalHost

  • Validation of the Subnet object changes against allocated IP addresses

  • Improvements in calculation of update estimates using ClusterUpdatePlan

2.28.5

Feb 03, 2025

Container Cloud 2.28.5 is the fifth patch release of the 2.28.x release series that introduces the following updates:

  • Support for the patch Cluster releases 16.3.5 and 17.3.5, the latter of which represents MOSK patch release 24.3.2.

  • Support for Mirantis Kubernetes Engine 3.7.18 and Mirantis Container Runtime 23.0.15, which includes containerd 1.6.36.

  • Optional migration of container runtime from Docker to containerd.

  • Bare metal: update of Ubuntu mirror to ubuntu-2025-01-08-003900 along with update of minor kernel version to 5.15.0-130-generic.

  • Security fixes for CVEs in images.

2.28.4

Jan 06, 2025

Container Cloud 2.28.4 is the fourth patch release of the 2.28.x release series that introduces the following updates:

  • Support for the patch Cluster releases 16.3.4 and 17.3.4, the latter of which represents MOSK patch release 24.3.1.

  • Support for Mirantis Kubernetes Engine 3.7.17 and Mirantis Container Runtime 23.0.15, which includes containerd 1.6.36.

  • Optional migration of container runtime from Docker to containerd.

  • Bare metal: update of Ubuntu mirror to ubuntu-2024-12-05-003900 along with update of minor kernel version to 5.15.0-126-generic.

  • Security fixes for CVEs in images.

  • OpenStack provider: suspension of support for cluster deployment and update

2.28.3

Dec 09, 2024

Container Cloud 2.28.3 is the third patch release of the 2.28.x release series that introduces the following updates:

  • Support for the patch Cluster release 16.3.3.

  • Support for the patch Cluster releases 16.2.7 and 17.2.7, the latter of which represents MOSK patch release 24.2.5.

  • Bare metal: update of Ubuntu mirror to ubuntu-2024-11-18-003900 along with update of minor kernel version to 5.15.0-125-generic.

  • Security fixes for CVEs in images.

2.28.2

Nov 18, 2024

Container Cloud 2.28.2 is the second patch release of the 2.28.x release series that introduces the following updates:

  • Support for the patch Cluster release 16.3.2.

  • Support for the patch Cluster releases 16.2.6 and 17.2.6, the latter of which represents MOSK patch release 24.2.4.

  • Support for MKE 3.7.16.

  • Bare metal: update of Ubuntu mirror to ubuntu-2024-10-28-012906 along with update of minor kernel version to 5.15.0-124-generic.

  • Security fixes for CVEs in images.

2.28.1

Oct 30, 2024

Container Cloud 2.28.1 is the first patch release of the 2.28.x release series that introduces the following updates:

  • Support for the patch Cluster release 16.3.1.

  • Support for the patch Cluster releases 16.2.5 and 17.2.5, the latter of which represents MOSK patch release 24.2.3.

  • Support for MKE 3.7.15.

  • Bare metal: update of Ubuntu mirror to ubuntu-2024-10-14-013948 along with update of minor kernel version to 5.15.0-122-generic.

  • Security fixes for CVEs in images.

2.28.0

Oct 16, 2024

  • General availability for Ubuntu 22.04 on MOSK clusters

  • Improvements in the CIS Benchmark compliance for Ubuntu Linux 22.04 LTS v2.0.0 L1 Server

  • Support for MKE 3.7.12 on clusters following the major update path

  • Support for MCR 23.0.14

  • Update group for controller nodes

  • Reboot of machines using update groups

  • Amendments for the ClusterUpdatePlan object

  • Refactoring of delayed auto-update of a management cluster

  • Self-diagnostics for management and managed clusters

  • Configuration of groups in auditd

  • Container Cloud web UI enhancements for the bare metal provider

  • Day-2 operations for bare metal:

    • Updating modules

    • Configuration enhancements for modules

  • StackLight:

    • Monitoring of LCM issues

    • Refactoring of StackLight expiration alerts

  • Documentation enhancements

- Cluster release is deprecated and will become unsupported in one of the following Container Cloud releases.

Container Cloud releases

This section outlines the release notes for the Mirantis Container Cloud GA release. Within the scope of the Container Cloud GA release, major releases are being published continuously with new features, improvements, and critical issues resolutions to enhance the Container Cloud GA version. Between major releases, patch releases that incorporate fixes for CVEs of high and critical severity are being delivered. For details, see Container Cloud releases, Cluster releases (managed), and Patch releases.

Once a new Container Cloud release is available, a management cluster automatically upgrades to a newer consecutive release unless this cluster contains managed clusters with a Cluster release unsupported by the newer Container Cloud release. For more details about the Container Cloud release mechanism, see Reference Architecture: Release Controller.

2.29.2 (current)

The Container Cloud patch release 2.29.2, which is based on the 2.29.0 major release, provides the following updates:

  • Support for the patch Cluster releases 16.3.7 and 16.4.2

  • Support for the patch Cluster release 17.3.7 that represents Mirantis OpenStack for Kubernetes (MOSK) patch release 24.3.4

  • Support for Mirantis Kubernetes Engine 3.7.20

  • Support for docker-ee-cli 23.0.17 on MOSK clusters (MCR 23.0.15) and 25.0.9m1 on management clusters (MCR 25.0.8)

  • Mandatory migration of container runtime from Docker to containerd

  • Bare metal: update of Ubuntu mirror from ubuntu-2025-03-05-003900 to ubuntu-2025-03-31-003900 along with update of minor kernel version from 5.15.0-134-generic to 5.15.0-135-generic

  • Ubuntu base image: support for utils that extend NVMe provisioning options

  • Security fixes for CVEs in images

This patch release also supports the latest major Cluster releases 17.4.0 and 16.4.0. It does not support greenfield deployments based on deprecated Cluster releases; use the latest available Cluster releases instead.

For main deliverables of the parent Container Cloud release of 2.29.0, refer to 2.29.0.

Update notes

This section describes the specific actions you as a cloud operator need to complete before or after your Container Cloud cluster update to the Cluster releases 17.3.7, 16.3.7, or 16.4.2.

Consider the information below as a supplement to the generic update procedures published in MOSK Operations Guide: Automatic upgrade of a management cluster and Update to a patch version.

Post-update actions
Mandatory migration of container runtime from Docker to containerd

Migration of container runtime from Docker to containerd, which is implemented for existing management and managed clusters, becomes mandatory in the scope of Container Cloud 2.29.x. Otherwise, the management cluster update to Container Cloud 2.30.0 will be blocked.

The use of containerd allows for better Kubernetes performance and component update without pod restart when applying fixes for CVEs. For the migration procedure, refer to MOSK Operations Guide: Migrate container runtime from Docker to containerd.

Important

Container runtime migration involves machine cordoning and draining.

Security notes

In total, since Container Cloud 2.29.1, 103 Common Vulnerabilities and Exposures (CVE) issues have been fixed in 2.29.2: 5 of critical and 98 of high severity.

The table below includes the total numbers of addressed unique and common CVEs in images by product component. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component

CVE type

Critical

High

Total

Ceph

Unique

0

3

3

Common

0

13

13

KaaS core

Unique

2

14

16

Common

2

66

68

StackLight

Unique

2

6

8

Common

3

19

22

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 24.3.4: Security notes.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.29.2 including the Cluster releases 17.3.7, 16.3.7, and 16.4.2. For the known issues in the related MOSK release, see MOSK release notes 24.3.4: Known issues.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[42386] A load balancer service does not obtain the external IP address

Due to the MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first service that has been changed remains with no external IP address assigned. However, the second service, which was changed later, has the external IP assigned as expected.

To work around the issue, make a dummy change to the service object where external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.
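For example, assuming <nodeName> is the name of the manager node that runs the ironic Pod:

kubectl cordon <nodeName>
kubectl drain <nodeName>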

Ceph
[50637] Ceph creates second miracephnodedisable object during node disabling

During managed cluster update, if some node is being disabled and at the same time ceph-maintenance-controller is restarted, a second miracephnodedisable object is erroneously created for the node. As a result, the second object fails in the Cleaning state, which blocks managed cluster update.

Workaround

  1. On the affected managed cluster, obtain the list of miracephnodedisable objects:

    kubectl get miracephnodedisable -n ceph-lcm-mirantis
    

    The system response must contain one completed and one failed miracephnodedisable object for the node being disabled. For example:

    NAME                                               AGE   NODE NAME                                        STATE      LAST CHECK             ISSUE
    nodedisable-353ccad2-8f19-4c11-95c9-a783abb531ba   58m   kaas-node-91207a35-3200-41d1-9ba9-388500970981   Ready      2025-03-06T22:04:48Z
    nodedisable-58bbf563-1c76-4319-8c28-363d73a5efef   57m   kaas-node-91207a35-3200-41d1-9ba9-388500970981   Cleaning   2025-03-07T11:59:27Z   host clean up Job 'ceph-lcm-mirantis/host-cleanup-nodedisable-58bbf563-1c76-4319-8c28-363d73a5efef' is failed, check logs
    
  2. Remove the failed miracephnodedisable object. For example:

    kubectl delete miracephnodedisable -n ceph-lcm-mirantis nodedisable-58bbf563-1c76-4319-8c28-363d73a5efef
    
[50566] Ceph upgrade is very slow during patch or major cluster update

Due to the upstream Ceph issue 66717, during a CVE upgrade of the Ceph daemon image of Ceph Reef 18.2.4, OSDs may start slowly and even fail the startup probe with the following describe output in the rook-ceph-osd-X pod:

 Warning  Unhealthy  57s (x16 over 3m27s)  kubelet  Startup probe failed:
 ceph daemon health check failed with the following output:
> no valid command found; 10 closest matches:
> 0
> 1
> 2
> abort
> assert
> bluefs debug_inject_read_zeros
> bluefs files list
> bluefs stats
> bluestore bluefs device info [<alloc_size:int>]
> config diff
> admin_socket: invalid command

Workaround:

Complete the following steps during every patch or major cluster update of the Cluster releases 17.2.x, 17.3.x, and 17.4.x (until Ceph 18.2.5 becomes supported):

  1. Plan extra time in the maintenance window for the patch cluster update.

    Slow starts will still impact the update procedure, but after completing the following step, the recovery process noticeably shortens without affecting the overall cluster state and data responsiveness.

  2. Select one of the following options:

    • Before the cluster update, set the noout flag:

      ceph osd set noout
      

      Once the Ceph OSDs image upgrade is done, unset the flag:

      ceph osd unset noout
      
    • Monitor the Ceph OSDs image upgrade. If the symptoms of slow start appear, set the noout flag as soon as possible. Once the Ceph OSDs image upgrade is done, unset the flag.

[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a bare metal based managed cluster with Ceph enabled fails with a PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the descriptions of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.
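
Steps 3 and 7 intentionally omit exact commands because the affected workload differs per cluster. As an illustration only, assuming the stuck PersistentVolumeClaim belongs to the prometheus-server StatefulSet in the stacklight namespace, the scale down and scale up may look as follows:

kubectl -n stacklight scale statefulset prometheus-server --replicas=0
# ... complete steps 4-6 ...
kubectl -n stacklight scale statefulset prometheus-server --replicas=1

Restore the replica count that the StatefulSet or Deployment had before scaling it down.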

LCM
[50561] The local-volume-provisioner pod switches to CrashLoopBackOff

After machine disablement and consequent re-enablement, persistent volumes (PVs) provisioned by local-volume-provisioner that are not used by any pod may cause the local-volume-provisioner pod on such a machine to switch to the CrashLoopBackOff state.

Workaround:

  1. Identify the ID of the affected local-volume-provisioner:

    kubectl -n kube-system get pods
    

    Example of system response extract:

    local-volume-provisioner-h5lrc   0/1   CrashLoopBackOff   33 (2m3s ago)   90m
    
  2. In the local-volume-provisioner logs, identify the affected PVs. For example:

    kubectl logs -n kube-system local-volume-provisioner-h5lrc | less
    

    Example of system response extract:

    E0304 23:21:31.455148    1 discovery.go:221] Failed to discover local volumes:
    5 error(s) while discovering volumes: [error creating PV "local-pv-1d04ed53"
    for volume at "/mnt/local-volumes/openstack-operator/bind-mounts/vol04":
    persistentvolumes "local-pv-1d04ed53" already exists error creating PV "local-pv-ce2dfc24"
    for volume at "/mnt/local-volumes/openstack-operator/bind-mounts/vol01":
    persistentvolumes "local-pv-ce2dfc24" already exists error creating PV "local-pv-bcb9e4bd"
    for volume at "/mnt/local-volumes/openstack-operator/bind-mounts/vol02":
    persistentvolumes "local-pv-bcb9e4bd" already exists error creating PV "local-pv-c5924ada"
    for volume at "/mnt/local-volumes/openstack-operator/bind-mounts/vol03":
    persistentvolumes "local-pv-c5924ada" already exists error creating PV "local-pv-7c7150cf"
    for volume at "/mnt/local-volumes/openstack-operator/bind-mounts/vol00":
    persistentvolumes "local-pv-7c7150cf" already exists]
    
  3. Delete all PVs that contain the already exists error in logs. For example:

    kubectl delete pv local-pv-1d04ed53
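
    If many PVs are affected, the names reported as already existing can be extracted from the logs and deleted in one pass. This is a convenience sketch, not part of the documented procedure; review the resulting list before deleting:

    kubectl logs -n kube-system local-volume-provisioner-h5lrc \
      | grep -oE 'persistentvolumes "[^"]+" already exists' \
      | grep -oE 'local-pv-[a-z0-9]+' | sort -u \
      | xargs -r kubectl delete pv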
    
[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.
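
The steps above are intentionally high level because Pod names and namespaces vary per deployment. A minimal sketch follows, assuming the affected Pod is mariadb-server-0 in the kaas namespace; adjust the names to your cluster and verify that tar is available in the container before relying on kubectl cp:

# back up the data directory from the affected Pod (assumed Pod name and namespace)
kubectl -n kaas exec mariadb-server-0 -- tar czf /tmp/mysql-backup.tgz /var/lib/mysql
kubectl -n kaas cp mariadb-server-0:/tmp/mysql-backup.tgz ./mysql-backup.tgz

# remove the galera.cache file and let Kubernetes restart the Pod
kubectl -n kaas exec mariadb-server-0 -- rm /var/lib/mysql/galera.cache
kubectl -n kaas delete pod mariadb-server-0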

StackLight
[43474] Custom Grafana dashboards are corrupted

Custom Grafana panels and dashboards may be corrupted after automatic migration of deprecated Angular-based plugins to the React-based ones. For details, see MOSK Deprecation Notes: Angular plugins in Grafana dashboards and the post-update step Back up custom Grafana dashboards in Container Cloud 2.28.4 update notes.

To work around the issue, manually adjust the affected dashboards to restore their custom appearance.

Cluster update
[51339] Cluster upgrade to 2.29.2 is blocked if nodes are not rebooted

Cluster upgrade from Container Cloud 2.29.1 to 2.29.2 is blocked if at least one node of any cluster is not rebooted while applying the upgrade. As a workaround, reboot all cluster nodes.

Container Cloud web UI
[50181] Failure to deploy a compact cluster

A compact MOSK cluster fails to be deployed through the Container Cloud web UI because the web UI does not allow adding labels to the control plane machines or changing dedicatedControlPlane: false.

To work around the issue, manually add the required labels using CLI. Once done, the cluster deployment resumes.

[50168] Inability to use a new project right after creation

A newly created project does not display all available tabs in the Container Cloud web UI and shows various access denied errors during the first five minutes after creation.

To work around the issue, refresh the browser five minutes after the project creation.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.29.2 along with the Cluster releases 17.3.7, 16.3.7, and 16.4.2, where applicable. For the list of MOSK addressed issues, if any, see MOSK documentation: Release notes 24.3.4.

  • [51145] [StackLight] Addressed the issue that caused the PrometheusTargetScrapesDuplicate alert to permanently fire on a management cluster that has sf-notifier enabled.

  • [50636] [LCM] Addressed the issue that caused the nfs-common package to be deleted during MOSK cluster update. This package is no longer automatically removed from cluster nodes if a MOSK cluster is deployed with the MariaDB backup hosted on an external NFS backend.

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.29.2. For artifacts of the Cluster releases introduced in 2.29.2, see patch Cluster releases 17.3.7, 16.3.7, and 16.4.2.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

ironic-python-agent.initramfs Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-caracal-jammy-debug-20250331145024

ironic-python-agent.kernel Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-caracal-jammy-debug-20250331145024

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-167-e7a55fd.tgz

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.42.16.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.42.16.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.42.16.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.42.16.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.42.16.tgz

Docker images

ambassador Updated

mirantis.azurecr.io/core/external/nginx:1.42.16

baremetal-dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-29-alpine-20250407132604

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-2-29-alpine-20250407132309

bm-collective

mirantis.azurecr.io/bm/bm-collective:base-2-29-alpine-20250303153858

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.42.16

ironic

mirantis.azurecr.io/openstack/ironic:caracal-jammy-20250217105151

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:caracal-jammy-20250217105151

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240913123302

kaas-ipam Updated

mirantis.azurecr.io/core/kaas-ipam:1.42.16

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-202a68c-20250203183923

mariadb

mirantis.azurecr.io/general/mariadb:10.6.20-jammy-20241104184039

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20250217103755

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.42.16.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.42.16.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.42.16.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.42.16.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.42.16.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.42.16.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.42.16.tgz

credentials-controller Deprecated

https://binary.mirantis.com/core/helm/credentials-controller-1.42.16.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.42.16.tgz

host-os-modules-controller

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.42.16.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.42.16.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.42.16.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.42.16.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.42.16.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.42.16.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.42.16.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.42.16.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.42.16.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.42.16.tgz

openstack-provider Deprecated

https://binary.mirantis.com/core/helm/openstack-provider-1.42.16.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.42.16.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.42.16.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.42.16.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.42.16.tgz

secret-controller

https://binary.mirantis.com/core/helm/secret-controller-1.42.16.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.42.16.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.42.16

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.42.16

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.42.16

cert-manager-controller Updated

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-13

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.42.16

credentials-controller Deprecated

mirantis.azurecr.io/core/credentials-controller:1.42.16

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.42.16

frontend Updated

mirantis.azurecr.io/core/frontend:1.42.16

host-os-modules-controller Updated

mirantis.azurecr.io/core/host-os-modules-controller:1.42.16

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.42.16

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.42.16

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.42.16

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.42.16

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.42.16

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.42.16

mcc-cache-warmup Updated

mirantis.azurecr.io/core/mcc-cache-warmup:1.42.16

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.42.16

openstack-cluster-api-controller Deprecated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.42.16

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.42.16

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.42.16

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-15

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.42.16

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.42.16

secret-controller Updated

mirantis.azurecr.io/core/secret-controller:1.42.16

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.42.16

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/core/helm/iam-1.42.16.tgz

Docker images

kubectl

mirantis.azurecr.io/general/kubectl:20250307094924

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-d06c869-20250204085201

mariadb

mirantis.azurecr.io/general/mariadb:10.6.20-jammy-20250218081722

mcc-keycloak

mirantis.azurecr.io/iam/mcc-keycloak:25.0.6-20250304083438

Unsupported releases
Unsupported Container Cloud releases history - 2025

Version

Release date

Summary

2.29.1

Mar 26, 2025

Container Cloud 2.29.1 is the first patch release of the 2.29.x release series that introduces the following updates:

  • Support for the patch Cluster releases 16.4.1, 16.3.6, and 17.3.6 that represents MOSK patch release 24.3.3.

  • Support for Mirantis Kubernetes Engine 3.7.20

  • Support for docker-ee-cli 23.0.17 on MOSK clusters (MCR 23.0.15) and 25.0.9m1 on management clusters (MCR 25.0.8)

  • Mandatory migration of container runtime from Docker to containerd

  • Bare metal: update of Ubuntu mirror to ubuntu-2025-03-05-003900 along with update of minor kernel version to 5.15.0-134-generic

  • Security fixes for CVEs in images

2.29.0

Mar 11, 2025

  • Improvements in the CIS Benchmark compliance for Ubuntu Linux 22.04 LTS v2.0.0 L1 Server

  • Support for MKE 3.7.19

  • Support for MCR 25.0.8

  • Switch of the default container runtime from Docker to containerd

  • BareMetalHostInventory instead of BareMetalHost

  • Validation of the Subnet object changes against allocated IP addresses

  • Improvements in calculation of update estimates using ClusterUpdatePlan

2.28.5

Feb 03, 2025

Container Cloud 2.28.5 is the fifth patch release of the 2.28.x release series that introduces the following updates:

  • Support for the patch Cluster releases 16.3.5 and 17.3.5 that represent MOSK patch release 24.3.2.

  • Support for Mirantis Kubernetes Engine 3.7.18 and Mirantis Container Runtime 23.0.15, which includes containerd 1.6.36.

  • Optional migration of container runtime from Docker to containerd.

  • Bare metal: update of Ubuntu mirror to ubuntu-2025-01-08-003900 along with update of minor kernel version to 5.15.0-130-generic.

  • Security fixes for CVEs in images.

2.28.4

Jan 06, 2025

Container Cloud 2.28.4 is the fourth patch release of the 2.28.x release series that introduces the following updates:

  • Support for the patch Cluster releases 16.3.4 and 17.3.4 that represent MOSK patch release 24.3.1.

  • Support for Mirantis Kubernetes Engine 3.7.17 and Mirantis Container Runtime 23.0.15, which includes containerd 1.6.36.

  • Optional migration of container runtime from Docker to containerd.

  • Bare metal: update of Ubuntu mirror to ubuntu-2024-12-05-003900 along with update of minor kernel version to 5.15.0-126-generic.

  • Security fixes for CVEs in images.

  • OpenStack provider: suspension of support for cluster deployment and update

2.29.1

The Container Cloud patch release 2.29.1, which is based on the 2.29.0 major release, provides the following updates:

  • Support for the patch Cluster releases 16.3.6 and 16.4.1

  • Support for the patch Cluster release 17.3.6 that represents Mirantis OpenStack for Kubernetes (MOSK) patch release 24.3.3

  • Support for Mirantis Kubernetes Engine 3.7.20

  • Support for docker-ee-cli 23.0.17 on MOSK clusters (MCR 23.0.15) and 25.0.9m1 on management clusters (MCR 25.0.8)

  • Mandatory migration of container runtime from Docker to containerd

  • Bare metal: update of Ubuntu mirror to ubuntu-2025-03-05-003900 along with update of minor kernel version to 5.15.0-134-generic

  • Security fixes for CVEs in images

This patch release also supports the latest major Cluster releases 17.4.0 and 16.4.0. It does not support greenfield deployments based on deprecated Cluster releases. Use the latest available Cluster releases instead.

For main deliverables of the parent Container Cloud release of 2.29.0, refer to 2.29.0.

Update notes

This section describes the specific actions you as a cloud operator need to complete before or after your Container Cloud cluster update to the Cluster releases 17.3.6, 16.3.6, or 16.4.1.

Consider the information below as a supplement to the generic update procedures published in MOSK Operations Guide: Automatic upgrade of a management cluster and Update to a patch version.

Pre-update actions
Update managed clusters to Ubuntu 22.04

Management cluster update to Container Cloud 2.29.1 will be blocked if at least one node of any related managed cluster is running Ubuntu 20.04, which reaches end-of-life in April 2025. Moreover, in Container Cloud 2.29.0, the Cluster release update of the Ubuntu 20.04-based managed clusters became impossible, and Ubuntu 22.04 became the only supported version of the operating system.

Therefore, ensure that every node of all your managed clusters is running Ubuntu 22.04 to unblock the management cluster update in Container Cloud 2.29.1 and managed cluster update in Container Cloud 2.29.0.

For the update procedure, refer to Mirantis OpenStack for Kubernetes documentation: Bare metal operations - Upgrade an operating system distribution.
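
To quickly check which managed cluster nodes still run Ubuntu 20.04, the following generic kubectl query (not a product-specific command) prints the operating system image reported by each node:

kubectl get nodes -o custom-columns=NAME:.metadata.name,OS:.status.nodeInfo.osImage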

Note

Existing management clusters were automatically updated to Ubuntu 22.04 during cluster upgrade to the Cluster release 16.2.0 in Container Cloud 2.27.0. Greenfield deployments of management clusters are also based on Ubuntu 22.04.

Post-update actions
Migration of container runtime from Docker to containerd

Since Container Cloud 2.28.4, Mirantis introduced an optional migration of container runtime from Docker to containerd, which is implemented for existing management and managed bare metal clusters. This migration becomes mandatory in the scope of Container Cloud 2.29.x. Otherwise, the management cluster update to Container Cloud 2.30.0 will be blocked.

The use of containerd allows for better Kubernetes performance and component update without pod restart when applying fixes for CVEs. For the migration procedure, refer to MOSK Operations Guide: Migrate container runtime from Docker to containerd.
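
To verify which container runtime each node currently reports, before or after the migration, you can use the following generic kubectl query. It is a convenience check only and is not part of the documented migration procedure:

kubectl get nodes -o custom-columns=NAME:.metadata.name,RUNTIME:.status.nodeInfo.containerRuntimeVersion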

Important

Container runtime migration involves machine cordoning and draining.

Security notes

In total, since Container Cloud 2.29.0, 203 Common Vulnerabilities and Exposures (CVE) have been fixed in 2.29.1: 21 of critical and 182 of high severity.

The table below includes the total numbers of addressed unique and common CVEs in images by product component since Container Cloud 2.29.0. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component    CVE type    Critical    High    Total

Ceph                 Unique      1           10      11
                     Common      2           58      60
KaaS core            Unique      7           27      34
                     Common      13          78      91
StackLight           Unique      3           13      16
                     Common      6           46      52

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 24.3.3: Security notes.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.29.1 including the Cluster releases 17.3.6, 16.3.6, and 16.4.1. For the known issues in the related MOSK release, see MOSK release notes 24.3.3: Known issues.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[42386] A load balancer service does not obtain the external IP address

Due to the MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first service that was changed remains with no external IP address assigned, while the second service, which was changed later, has the external IP assigned as expected.

To work around the issue, make a dummy change to the service object where external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.

Ceph
[50637] Ceph creates second miracephnodedisable object during node disabling

During managed cluster update, if some node is being disabled and at the same time ceph-maintenance-controller is restarted, a second miracephnodedisable object is erroneously created for the node. As a result, the second object fails in the Cleaning state, which blocks managed cluster update.

Workaround

  1. On the affected managed cluster, obtain the list of miracephnodedisable objects:

    kubectl get miracephnodedisable -n ceph-lcm-mirantis
    

    The system response must contain one completed and one failed miracephnodedisable object for the node being disabled. For example:

    NAME                                               AGE   NODE NAME                                        STATE      LAST CHECK             ISSUE
    nodedisable-353ccad2-8f19-4c11-95c9-a783abb531ba   58m   kaas-node-91207a35-3200-41d1-9ba9-388500970981   Ready      2025-03-06T22:04:48Z
    nodedisable-58bbf563-1c76-4319-8c28-363d73a5efef   57m   kaas-node-91207a35-3200-41d1-9ba9-388500970981   Cleaning   2025-03-07T11:59:27Z   host clean up Job 'ceph-lcm-mirantis/host-cleanup-nodedisable-58bbf563-1c76-4319-8c28-363d73a5efef' is failed, check logs
    
  2. Remove the failed miracephnodedisable object. For example:

    kubectl delete miracephnodedisable -n ceph-lcm-mirantis nodedisable-58bbf563-1c76-4319-8c28-363d73a5efef
    
[50566] Ceph upgrade is very slow during patch or major cluster update

Due to the upstream Ceph issue 66717, during a CVE upgrade of the Ceph daemon image of Ceph Reef 18.2.4, OSDs may start slowly and even fail the startup probe with the following describe output in the rook-ceph-osd-X pod:

 Warning  Unhealthy  57s (x16 over 3m27s)  kubelet  Startup probe failed:
 ceph daemon health check failed with the following output:
> no valid command found; 10 closest matches:
> 0
> 1
> 2
> abort
> assert
> bluefs debug_inject_read_zeros
> bluefs files list
> bluefs stats
> bluestore bluefs device info [<alloc_size:int>]
> config diff
> admin_socket: invalid command

Workaround:

Complete the following steps during every patch or major cluster update of the Cluster releases 17.2.x, 17.3.x, and 17.4.x (until Ceph 18.2.5 becomes supported):

  1. Plan extra time in the maintenance window for the patch cluster update.

    Slow starts will still impact the update procedure, but after completing the following step, the recovery process noticeably shortens without affecting the overall cluster state and data responsiveness.

  2. Select one of the following options:

    • Before the cluster update, set the noout flag:

      ceph osd set noout
      

      Once the Ceph OSDs image upgrade is done, unset the flag:

      ceph osd unset noout
      
    • Monitor the Ceph OSDs image upgrade. If the symptoms of slow start appear, set the noout flag as soon as possible. Once the Ceph OSDs image upgrade is done, unset the flag.

[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a bare metal based managed cluster with Ceph enabled fails with a PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the descriptions of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.

LCM
[50561] The local-volume-provisioner pod switches to CrashLoopBackOff

After machine disablement and consequent re-enablement, persistent volumes (PVs) provisioned by local-volume-provisioner that are not used by any pod may cause the local-volume-provisioner pod on such a machine to switch to the CrashLoopBackOff state.

Workaround:

  1. Identify the ID of the affected local-volume-provisioner:

    kubectl -n kube-system get pods
    

    Example of system response extract:

    local-volume-provisioner-h5lrc   0/1   CrashLoopBackOff   33 (2m3s ago)   90m
    
  2. In the local-volume-provisioner logs, identify the affected PVs. For example:

    kubectl logs -n kube-system local-volume-provisioner-h5lrc | less
    

    Example of system response extract:

    E0304 23:21:31.455148    1 discovery.go:221] Failed to discover local volumes:
    5 error(s) while discovering volumes: [error creating PV "local-pv-1d04ed53"
    for volume at "/mnt/local-volumes/openstack-operator/bind-mounts/vol04":
    persistentvolumes "local-pv-1d04ed53" already exists error creating PV "local-pv-ce2dfc24"
    for volume at "/mnt/local-volumes/openstack-operator/bind-mounts/vol01":
    persistentvolumes "local-pv-ce2dfc24" already exists error creating PV "local-pv-bcb9e4bd"
    for volume at "/mnt/local-volumes/openstack-operator/bind-mounts/vol02":
    persistentvolumes "local-pv-bcb9e4bd" already exists error creating PV "local-pv-c5924ada"
    for volume at "/mnt/local-volumes/openstack-operator/bind-mounts/vol03":
    persistentvolumes "local-pv-c5924ada" already exists error creating PV "local-pv-7c7150cf"
    for volume at "/mnt/local-volumes/openstack-operator/bind-mounts/vol00":
    persistentvolumes "local-pv-7c7150cf" already exists]
    
  3. Delete all PVs that contain the already exists error in logs. For example:

    kubectl delete pv local-pv-1d04ed53
    
[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.

StackLight
[51145] PrometheusTargetScrapesDuplicate firing on a management cluster

Fixed in 2.29.2 (17.3.7, 16.3.7, and 16.4.2)

On management clusters with sf-notifier enabled, the PrometheusTargetScrapesDuplicate alert is permanently firing while sf-notifier runs with no errors.

You can safely disregard the issue because it does not affect cluster health.
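
If the permanently firing alert creates noise, it can be temporarily silenced until the cluster is updated to 2.29.2. A minimal sketch, assuming access to the cluster Alertmanager through the standard amtool CLI (the endpoint and duration are placeholders):

amtool silence add alertname=PrometheusTargetScrapesDuplicate \
  --alertmanager.url=http://<alertmanager-endpoint> \
  --duration=720h --comment="Known issue 51145, fixed in 2.29.2"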

[43474] Custom Grafana dashboards are corrupted

Custom Grafana panels and dashboards may be corrupted after automatic migration of deprecated Angular-based plugins to the React-based ones. For details, see MOSK Deprecation Notes: Angular plugins in Grafana dashboards and the post-update step Back up custom Grafana dashboards in Container Cloud 2.28.4 update notes.

To work around the issue, manually adjust the affected dashboards to restore their custom appearance.

Container Cloud web UI
[50181] Failure to deploy a compact cluster

A compact MOSK cluster fails to be deployed through the Container Cloud web UI because the web UI does not allow adding labels to the control plane machines or changing dedicatedControlPlane: false.

To work around the issue, manually add the required labels using CLI. Once done, the cluster deployment resumes.

[50168] Inability to use a new project right after creation

A newly created project does not display all available tabs in the Container Cloud web UI and shows various access denied errors during the first five minutes after creation.

To work around the issue, refresh the browser five minutes after the project creation.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.29.1 along with the Cluster releases 17.3.6, 16.3.6, and 16.4.1, where applicable. For the list of MOSK addressed issues, if any, see MOSK documentation: Release notes 24.3.3.

  • [50768] [LCM] Addressed the issue that prevented successful editing of the MCCUpgrade object, which failed with the Internal error failed to call webhook: the server could not find the requested resource message when trying to save changes in the object.

  • [50622] [core] Addressed the issue that prevented any user except m:kaas@management-admin from accessing or modifying BareMetalHostInventory objects.

  • [50287] [bare metal] Addressed the issue that prevented a BareMetalHost object with a Redfish Baseboard Management Controller address from passing the registering phase.

  • [50140] [Container Cloud web UI] Addressed the issue that prevented the Clusters page for the bare metal provider from displaying information about the Ceph cluster in the Ceph Clusters tab.

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.29.1. For artifacts of the Cluster releases introduced in 2.29.1, see patch Cluster releases 17.3.6, 16.3.6, and 16.4.1.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

ironic-python-agent.initramfs Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-caracal-jammy-debug-20250305171003

ironic-python-agent.kernel Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-caracal-jammy-debug-20250305171003

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-167-e7a55fd.tgz

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.42.13.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.42.13.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.42.13.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.42.13.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.42.13.tgz

Docker images

ambassador Updated

mirantis.azurecr.io/core/external/nginx:1.42.13

baremetal-dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-29-alpine-20250319110626

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-2-29-alpine-20250318131222

bm-collective Updated

mirantis.azurecr.io/bm/bm-collective:base-2-29-alpine-20250303153858

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.42.13

ironic Updated

mirantis.azurecr.io/openstack/ironic:caracal-jammy-20250217105151

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:caracal-jammy-20250217105151

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240913123302

kaas-ipam Updated

mirantis.azurecr.io/core/kaas-ipam:1.42.13

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-202a68c-20250203183923

mariadb

mirantis.azurecr.io/general/mariadb:10.6.20-jammy-20241104184039

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20250217103755

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.42.13.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.42.13.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.42.13.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.42.13.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.42.13.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.42.13.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.42.13.tgz

credentials-controller Deprecated

https://binary.mirantis.com/core/helm/credentials-controller-1.42.13.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.42.13.tgz

host-os-modules-controller

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.42.13.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.42.13.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.42.13.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.42.13.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.42.13.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.42.13.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.42.13.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.42.13.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.42.13.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.42.13.tgz

openstack-provider Deprecated

https://binary.mirantis.com/core/helm/openstack-provider-1.42.13.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.42.13.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.42.13.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.42.13.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.42.13.tgz

secret-controller

https://binary.mirantis.com/core/helm/secret-controller-1.42.13.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.42.13.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.42.13

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.42.13

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.42.13

cert-manager-controller Updated

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-12

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.42.13

credentials-controller Deprecated

mirantis.azurecr.io/core/credentials-controller:1.42.13

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.42.13

frontend Updated

mirantis.azurecr.io/core/frontend:1.42.13

host-os-modules-controller Updated

mirantis.azurecr.io/core/host-os-modules-controller:1.42.13

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.42.13

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.42.13

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.42.13

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.42.13

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.42.13

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.42.13

mcc-cache-warmup

mirantis.azurecr.io/core/mcc-cache-warmup:1.42.13

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.42.13

openstack-cluster-api-controller Deprecated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.42.13

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.42.13

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.42.13

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-15

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.42.13

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.42.13

secret-controller Updated

mirantis.azurecr.io/core/secret-controller:1.42.13

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.42.13

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/core/helm/iam-1.42.13.tgz

Docker images Updated

kubectl

mirantis.azurecr.io/general/kubectl:20250307094924

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-d06c869-20250204085201

mariadb

mirantis.azurecr.io/general/mariadb:10.6.20-jammy-20250218081722

mcc-keycloak

mirantis.azurecr.io/iam/mcc-keycloak:25.0.6-20250304083438

2.29.0

The Mirantis Container Cloud major release 2.29.0:

  • Introduces support for the Cluster release 17.4.0 that is based on the Cluster release 16.4.0 and represents Mirantis OpenStack for Kubernetes (MOSK) 25.1.

  • Introduces support for the Cluster release 16.4.0 that is based on Mirantis Container Runtime (MCR) 25.0.8 and Mirantis Kubernetes Engine (MKE) 3.7.19 with Kubernetes 1.27.

  • Does not support greenfield deployments on deprecated Cluster releases of the 17.3.x and 16.3.x series. Use the latest available Cluster releases of the series instead.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.29.0.

Enhancements

This section outlines new features and enhancements introduced in the Container Cloud release 2.29.0.

  • For the list of enhancements delivered with the Cluster releases introduced by Container Cloud 2.29.0, see 17.4.0 and 16.4.0.

  • For the list of enhancements delivered with MOSK 25.1 introduced together with Container Cloud 2.29.0, see MOSK release notes 25.1: New features.

BareMetalHostInventory instead of BareMetalHost

To allow the operator to use the GitOps approach, implemented the BareMetalHostInventory resource that must be used instead of BareMetalHost for adding and modifying the configuration of bare metal servers.

The BareMetalHostInventory resource monitors and manages the state of a bare metal server and is created for each Machine with all information about machine hardware configuration.

Each BareMetalHostInventory object is synchronized with an automatically created BareMetalHost object, which is now used for internal purposes of the Container Cloud private API.

Caution

Any change in the BareMetalHost object will be overwritten by BareMetalHostInventory.

For any existing BareMetalHost object, a BareMetalHostInventory object is created automatically during cluster update.
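
For example, to confirm that the inventory objects exist for all hosts after the update, list both resource types in the cluster project namespace. This is a minimal sketch assuming the default lowercase plural resource names; the namespace is a placeholder:

kubectl -n <project-namespace> get baremetalhostinventories
kubectl -n <project-namespace> get baremetalhosts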

Caution

While the Cluster release of the management cluster is 16.4.0, BareMetalHostInventory operations are allowed to m:kaas@management-admin only. Once the management cluster is updated to the Cluster release 16.4.1 (or later), this limitation will be lifted.

Validation of the Subnet object changes against allocated IP addresses

Implemented a validation of the Subnet object changes against already allocated IP addresses. This validation is performed by the Admission Controller. The controller now blocks changes in the Subnet object containing allocated IP addresses that are out of the allocatable IP address space, which is formed by a CIDR address and include/exclude address ranges.
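
For illustration, a Subnet object defines its allocatable IP address space roughly as follows. The field values below are examples only; verify the exact schema against the product API reference for your release:

apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: example-subnet
  namespace: <project-namespace>
spec:
  cidr: 10.0.34.0/24
  includeRanges:
  - 10.0.34.100-10.0.34.200
  excludeRanges:
  - 10.0.34.150-10.0.34.160

With such a definition, the Admission Controller rejects edits that would leave already allocated IP addresses outside the resulting allocatable space.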

Improvements in calculation of update estimates using ClusterUpdatePlan

Improved the calculation of update estimates for a managed cluster update that is managed through the ClusterUpdatePlan object. Each step of ClusterUpdatePlan now has more precise estimates that are based on the following calculations:

  • The number and type of components updated between releases during patch updates

  • The number of nodes with particular roles in the OpenStack cluster

  • The number of nodes and storage used in the Ceph cluster

Also, the ClusterUpdatePlan object now contains the releaseNotes field that links to MOSK release notes of the target release.
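
To review the per-step estimates and the releaseNotes link for a pending update, the plan object can be inspected directly. A minimal sketch, assuming the object resides in the managed cluster project namespace:

kubectl -n <project-namespace> get clusterupdateplan
kubectl -n <project-namespace> get clusterupdateplan <plan-name> -o yaml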

Switch of the default container runtime from Docker to containerd

Switched the default container runtime from Docker to containerd on greenfield management and managed clusters. The use of containerd allows for better Kubernetes performance and component update without pod restart when applying fixes for CVEs.

On existing clusters, perform the mandatory migration from Docker to containerd in the scope of Container Cloud 2.29.x. Otherwise, the management cluster update to Container Cloud 2.30.0 will be blocked.

Important

Container runtime migration involves machine cordoning and draining.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.29.0 along with the Cluster releases 17.4.0 and 16.4.0. For the list of MOSK addressed issues, see MOSK release notes 25.1: Addressed issues.

Note

This section provides descriptions of issues addressed since the last Container Cloud patch release 2.28.5.

For details on addressed issues in earlier patch releases since 2.28.0, which are also included into the major release 2.29.0, refer to 2.28.x patch releases.

  • [47263] [StackLight] Fixed the issue with configuration inconsistencies for requests and limits between the deprecated resourcesPerClusterSize and resources parameters.

  • [44193] [StackLight] Fixed the issue with OpenSearch reaching the 85% disk usage watermark on High Availability clusters that use Local Volume Provisioner, which caused the OpenSearch cluster state to switch to Warning or Critical.

  • [46858] [Container Cloud web UI] Fixed the issue that prevented the drop-down menu from displaying the full list of allowed node labels.

  • [39437] [LCM] Fixed the issue that caused a failure to replace a master node, with the Kubelet's NodeReady condition is Unknown message appearing in the machine status on the remaining master nodes.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.29.0 including the Cluster releases 17.4.0 and 16.4.0. For the known issues in the related MOSK release, see MOSK release notes 25.1: Known issues.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[50287] BareMetalHost with a Redfish BMC address is stuck on registering phase

Fixed in 2.29.1 (17.3.6, 16.3.6, and 16.4.1)

A bare metal host containing a Redfish Baseboard Management Controller address with the following example configuration may get stuck during the registering phase:

bmc:
  address: redfish://192.168.1.150/redfish/v1/Systems/1

Workaround:

  1. Open the ironic-config configmap for editing:

    KUBECONFIG=mgmt_kubeconfig kubectl -n kaas edit cm ironic-config
    
  2. In the data:ironic.conf section, add the enabled_firmware_interfaces parameter:

    data:
      ironic.conf: |
    
        [DEFAULT]
        ...
        enabled_firmware_interfaces = redfish,no-firmware
        ...
    
  3. Restart Ironic:

    KUBECONFIG=mgmt_kubeconfig kubectl -n kaas rollout restart deployment/ironic
    
[42386] A load balancer service does not obtain the external IP address

Due to the MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first service that was changed remains with no external IP address assigned, while the second service, which was changed later, has the external IP assigned as expected.

To work around the issue, make a dummy change to the service object where external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.

Ceph
[50637] Ceph creates second miracephnodedisable object during node disabling

During managed cluster update, if some node is being disabled and at the same time ceph-maintenance-controller is restarted, a second miracephnodedisable object is erroneously created for the node. As a result, the second object fails in the Cleaning state, which blocks managed cluster update.

Workaround

  1. On the affected managed cluster, obtain the list of miracephnodedisable objects:

    kubectl get miracephnodedisable -n ceph-lcm-mirantis
    

    The system response must contain one completed and one failed miracephnodedisable object for the node being disabled. For example:

    NAME                                               AGE   NODE NAME                                        STATE      LAST CHECK             ISSUE
    nodedisable-353ccad2-8f19-4c11-95c9-a783abb531ba   58m   kaas-node-91207a35-3200-41d1-9ba9-388500970981   Ready      2025-03-06T22:04:48Z
    nodedisable-58bbf563-1c76-4319-8c28-363d73a5efef   57m   kaas-node-91207a35-3200-41d1-9ba9-388500970981   Cleaning   2025-03-07T11:59:27Z   host clean up Job 'ceph-lcm-mirantis/host-cleanup-nodedisable-58bbf563-1c76-4319-8c28-363d73a5efef' is failed, check logs
    
  2. Remove the failed miracephnodedisable object. For example:

    kubectl delete miracephnodedisable -n ceph-lcm-mirantis nodedisable-58bbf563-1c76-4319-8c28-363d73a5efef
    
[50566] Ceph upgrade is very slow during patch or major cluster update

Due to the upstream Ceph issue 66717, during a CVE upgrade of the Ceph daemon image of Ceph Reef 18.2.4, OSDs may start slowly and even fail the startup probe with the following describe output in the rook-ceph-osd-X pod:

 Warning  Unhealthy  57s (x16 over 3m27s)  kubelet  Startup probe failed:
 ceph daemon health check failed with the following output:
> no valid command found; 10 closest matches:
> 0
> 1
> 2
> abort
> assert
> bluefs debug_inject_read_zeros
> bluefs files list
> bluefs stats
> bluestore bluefs device info [<alloc_size:int>]
> config diff
> admin_socket: invalid command

Workaround:

Complete the following steps during every patch or major cluster update of the Cluster releases 17.2.x, 17.3.x, and 17.4.x (until Ceph 18.2.5 becomes supported):

  1. Plan extra time in the maintenance window for the patch cluster update.

    Slow starts will still impact the update procedure, but after completing the following step, the recovery process noticeably shortens without affecting the overall cluster state and data responsiveness.

  2. Select one of the following options:

    • Before the cluster update, set the noout flag:

      ceph osd set noout
      

      Once the Ceph OSDs image upgrade is done, unset the flag:

      ceph osd unset noout
      
    • Monitor the Ceph OSDs image upgrade. If the symptoms of slow start appear, set the noout flag as soon as possible. Once the Ceph OSDs image upgrade is done, unset the flag.

[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a bare metal based managed cluster with Ceph enabled fails with a PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the descriptions of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.
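
    The following is a minimal sketch, assuming the affected workload is the prometheus-server StatefulSet in the stacklight namespace. Record the original number of replicas to restore it in the last step:

    # Record the current replica count, then scale the workload to zero
    kubectl -n stacklight get statefulset prometheus-server -o jsonpath='{.spec.replicas}'
    kubectl -n stacklight scale statefulset prometheus-server --replicas=0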

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.

LCM
[50768] Failure to update the MCCUpgrade object

Fixed in 2.29.1 (17.3.6, 16.3.6, and 16.4.1)

While editing the MCCUpgrade object, the following error occurs when trying to save changes:

HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure",
"message":"Internal error occurred: failed calling webhook \"mccupgrades.kaas.mirantis.com\":
failed to call webhook: the server could not find the requested resource",
"reason":"InternalError",
"details":{"causes":[{"message":"failed calling webhook \"mccupgrades.kaas.mirantis.com\":
failed to call webhook: the server could not find the requested resource"}]},"code":500}

To work around the issue, remove the name: mccupgrades.kaas.mirantis.com entry from mutatingwebhookconfiguration:

kubectl --kubeconfig kubeconfig edit mutatingwebhookconfiguration admission-controller

Example configuration:

- admissionReviewVersions:
  - v1
  - v1beta1
  clientConfig:
    caBundle: <REDACTED>
    service:
      name: admission-controller
      namespace: kaas
      path: /mccupgrades
      port: 443
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: mccupgrades.kaas.mirantis.com
  namespaceSelector: {}
  objectSelector: {}
  reinvocationPolicy: Never
  rules:
  - apiGroups:
    - kaas.mirantis.com
    apiVersions:
    - v1alpha1
    operations:
    - CREATE
    - UPDATE
    resources:
    - mccupgrades
    scope: '*'
  sideEffects: NoneOnDryRun
  timeoutSeconds: 5
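
As an alternative to interactive editing, you can remove the entry non-interactively. The following is a minimal sketch that assumes jq is available on the local machine and that the webhook index is verified before patching:

# Identify the index of the mccupgrades.kaas.mirantis.com entry in the webhooks list
kubectl --kubeconfig kubeconfig get mutatingwebhookconfiguration admission-controller -o json \
  | jq '.webhooks | map(.name) | index("mccupgrades.kaas.mirantis.com")'

# Remove the entry by the returned index, for example, 3
kubectl --kubeconfig kubeconfig patch mutatingwebhookconfiguration admission-controller \
  --type=json -p '[{"op": "remove", "path": "/webhooks/3"}]'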
[50561] The local-volume-provisioner pod switches to CrashLoopBackOff

After machine disablement and subsequent re-enablement, persistent volumes (PVs) provisioned by local-volume-provisioner that are not used by any pod may cause the local-volume-provisioner pod on such a machine to switch to the CrashLoopBackOff state.

Workaround:

  1. Identify the ID of the affected local-volume-provisioner:

    kubectl -n kube-system get pods
    

    Example of system response extract:

    local-volume-provisioner-h5lrc   0/1   CrashLoopBackOff   33 (2m3s ago)   90m
    
  2. In the local-volume-provisioner logs, identify the affected PVs. For example:

    kubectl logs -n kube-system local-volume-provisioner-h5lrc | less
    

    Example of system response extract:

    E0304 23:21:31.455148    1 discovery.go:221] Failed to discover local volumes:
    5 error(s) while discovering volumes: [error creating PV "local-pv-1d04ed53"
    for volume at "/mnt/local-volumes/openstack-operator/bind-mounts/vol04":
    persistentvolumes "local-pv-1d04ed53" already exists error creating PV "local-pv-ce2dfc24"
    for volume at "/mnt/local-volumes/openstack-operator/bind-mounts/vol01":
    persistentvolumes "local-pv-ce2dfc24" already exists error creating PV "local-pv-bcb9e4bd"
    for volume at "/mnt/local-volumes/openstack-operator/bind-mounts/vol02":
    persistentvolumes "local-pv-bcb9e4bd" already exists error creating PV "local-pv-c5924ada"
    for volume at "/mnt/local-volumes/openstack-operator/bind-mounts/vol03":
    persistentvolumes "local-pv-c5924ada" already exists error creating PV "local-pv-7c7150cf"
    for volume at "/mnt/local-volumes/openstack-operator/bind-mounts/vol00":
    persistentvolumes "local-pv-7c7150cf" already exists]
    
  3. Delete all PVs that contain the already exists error in logs. For example:

    kubectl delete pv local-pv-1d04ed53
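
    If multiple PVs are affected, the following minimal sketch extracts the PV names reported with the already exists error and deletes them in bulk. It assumes the log format shown in the example above:

    kubectl logs -n kube-system local-volume-provisioner-h5lrc \
      | grep -o 'persistentvolumes "[^"]*" already exists' \
      | awk -F'"' '{print $2}' | sort -u \
      | xargs -r kubectl delete pv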
    
[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.
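
The following is a minimal sketch of the steps above. The namespace and Pod name are placeholders; substitute the actual values from your deployment:

# Back up /var/lib/mysql from the affected Pod to the local machine
kubectl -n <namespaceName> exec <mariadbServerPodName> -- tar czf /tmp/mysql-backup.tgz /var/lib/mysql
kubectl -n <namespaceName> cp <mariadbServerPodName>:/tmp/mysql-backup.tgz ./mysql-backup.tgz

# Remove the galera.cache file and restart the Pod
kubectl -n <namespaceName> exec <mariadbServerPodName> -- rm /var/lib/mysql/galera.cache
kubectl -n <namespaceName> delete pod <mariadbServerPodName>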

StackLight
[51145] PrometheusTargetScrapesDuplicate firing on a management cluster

Fixed in 2.29.2 (17.3.7, 16.3.7, and 16.4.2)

On management clusters with sf-notifier enabled, the PrometheusTargetScrapesDuplicate alert is permanently firing while sf-notifier runs with no errors.

You can safely disregard the issue because it does not affect cluster health.

[43474] Custom Grafana dashboards are corrupted

Custom Grafana panels and dashboards may be corrupted after automatic migration of deprecated Angular-based plugins to the React-based ones. For details, see MOSK Deprecation Notes: Angular plugins in Grafana dashboards and the post-update step Back up custom Grafana dashboards in Container Cloud 2.28.4 update notes.

To work around the issue, manually adjust the affected dashboards to restore their custom appearance.

Container Cloud web UI
[50181] Failure to deploy a compact cluster

A compact MOSK cluster fails to be deployed through the Container Cloud web UI because the web UI does not allow adding labels to the control plane machines or changing the dedicatedControlPlane: false value.

To work around the issue, manually add the required labels using the CLI. Once done, the cluster deployment resumes.

[50168] Inability to use a new project right after creation

A newly created project does not display all available tabs in the Container Cloud web UI and shows various access denied errors during the first five minutes after creation.

To work around the issue, wait for five minutes after the project creation and refresh the browser.

[50140] The Ceph Clusters tab does not display Ceph cluster details

Fixed in 2.29.1 (17.3.6, 16.3.6, 16.4.1)

The Clusters page for the bare metal provider does not display information about the Ceph cluster in the Ceph Clusters tab and contains access denied errors.

To work around the issue, verify the Ceph cluster state through CLI. For details, see MOSK documentation: Ceph operations - Verify Ceph.

Components versions

The following table lists major components and their versions delivered in Container Cloud 2.29.0. The components that are newly added, updated, deprecated, or removed as compared to 2.28.0, are marked with a corresponding superscript, for example, admission-controller Updated.

Component

Application/Service

Version

Bare metal Updated

ambassador

1.42.9

baremetal-dnsmasq

base-2-29-alpine-20250217104113

baremetal-operator

base-2-29-alpine-20250217104322

baremetal-provider

1.42.9

bm-collective

base-2-29-alpine-20250217104943

cluster-api-provider-baremetal

1.42.9

ironic

caracal-jammy-20250128120200

ironic-inspector

caracal-jammy-20250128120200

ironic-prometheus-exporter

0.1-20240913123302

kaas-ipam

1.42.9

kubernetes-entrypoint

1.0.1-202a68c-20250203183923

mariadb

10.6.20-jammy-20241104184039

syslog-ng

base-alpine-20250217103755

Container Cloud Updated

admission-controller

1.42.9

agent-controller

1.42.9

byo-cluster-api-controller

1.42.9

ceph-kcc-controller

1.42.9

cert-manager-controller

1.11.0-11

configuration-collector

1.42.9

event-controller

1.42.9

frontend

1.42.9

golang

1.23.6-alpine3.20

iam-controller

1.42.9

kaas-exporter

1.42.9

kproxy

1.42.9

lcm-controller

1.42.9

license-controller

1.42.9

machinepool-controller

1.42.9

nginx

1.42.9

portforward-controller

1.42.9

rbac-controller

1.42.9

registry

2.8.1-15

release-controller

1.42.9

scope-controller

1.42.9

secret-controller

1.42.9

user-controller

1.42.9

IAM Updated

iam

1.42.9

mariadb

10.6.20-jammy-20241104184039

mcc-keycloak

25.0.6-20241114073807

OpenStack Deprecated

host-os-modules-controller

1.42.9

openstack-cluster-api-controller

1.42.9

openstack-provider

1.42.9

Artifacts

This section lists the artifacts of components included in the Container Cloud release 2.29.0. The components that are newly added, updated, deprecated, or removed as compared to 2.28.0, are marked with a corresponding superscript, for example, admission-controller Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

ironic-python-agent.initramfs Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-caracal-jammy-debug-20250217102957

ironic-python-agent.kernel Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-caracal-jammy-debug-20250217102957

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-167-e7a55fd.tgz

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.42.9.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.42.9.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.42.9.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.42.9.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.42.9.tgz

Docker images Updated

ambassador

mirantis.azurecr.io/core/external/nginx:1.42.9

baremetal-dnsmasq

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-29-alpine-20250217104113

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-2-29-alpine-20250217104322

bm-collective

mirantis.azurecr.io/bm/bm-collective:base-2-29-alpine-20250217104943

cluster-api-provider-baremetal

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.42.9

ironic

mirantis.azurecr.io/openstack/ironic:caracal-jammy-20250128120200

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:caracal-jammy-20250128120200

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240913123302

kaas-ipam

mirantis.azurecr.io/core/kaas-ipam:1.42.9

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-202a68c-20250203183923

mariadb

mirantis.azurecr.io/general/mariadb:10.6.20-jammy-20241104184039

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20250217103755

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.42.9.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.42.9.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.42.9.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.42.9.tgz

byo-provider Removed

n/a

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.42.9.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.42.9.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.42.9.tgz

credentials-controller Deprecated

https://binary.mirantis.com/core/helm/credentials-controller-1.42.9.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.42.9.tgz

host-os-modules-controller

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.42.9.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.42.9.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.42.9.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.42.9.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.42.9.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.42.9.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.42.9.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.42.9.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.42.9.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.42.9.tgz

openstack-provider Deprecated

https://binary.mirantis.com/core/helm/openstack-provider-1.42.9.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.42.9.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.42.9.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.42.9.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.42.9.tgz

secret-controller

https://binary.mirantis.com/core/helm/secret-controller-1.42.9.tgz

squid-proxy Removed

n/a

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.42.9.tgz

Docker images Updated

admission-controller

mirantis.azurecr.io/core/admission-controller:1.42.9

agent-controller

mirantis.azurecr.io/core/agent-controller:1.42.9

byo-cluster-api-controller Removed

n/a

ceph-kcc-controller

mirantis.azurecr.io/core/ceph-kcc-controller:1.42.9

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-11

configuration-collector

mirantis.azurecr.io/core/configuration-collector:1.42.9

credentials-controller Deprecated

mirantis.azurecr.io/core/credentials-controller:1.42.9

event-controller

mirantis.azurecr.io/core/event-controller:1.42.9

frontend

mirantis.azurecr.io/core/frontend:1.42.9

host-os-modules-controller

mirantis.azurecr.io/core/host-os-modules-controller:1.42.9

iam-controller

mirantis.azurecr.io/core/iam-controller:1.42.9

kaas-exporter

mirantis.azurecr.io/core/kaas-exporter:1.42.9

kproxy

mirantis.azurecr.io/core/kproxy:1.42.9

lcm-controller

mirantis.azurecr.io/core/lcm-controller:1.42.9

license-controller

mirantis.azurecr.io/core/license-controller:1.42.9

machinepool-controller

mirantis.azurecr.io/core/machinepool-controller:1.42.9

mcc-cache-warmup

mirantis.azurecr.io/core/mcc-cache-warmup:1.42.9

nginx

mirantis.azurecr.io/core/external/nginx:1.42.9

openstack-cluster-api-controller Deprecated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.42.9

portforward-controller

mirantis.azurecr.io/core/portforward-controller:1.42.9

rbac-controller

mirantis.azurecr.io/core/rbac-controller:1.42.9

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-15

release-controller

mirantis.azurecr.io/core/release-controller:1.42.9

scope-controller

mirantis.azurecr.io/core/scope-controller:1.42.9

secret-controller

mirantis.azurecr.io/core/secret-controller:1.42.9

squid-proxy Removed

n/a

user-controller

mirantis.azurecr.io/core/user-controller:1.42.9

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/core/helm/iam-1.42.9.tgz

Docker images

kubectl

mirantis.azurecr.io/general/kubectl:20240926142019

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-ba8ada4-20240405150338

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.6.20-jammy-20241104184039

mcc-keycloak Updated

mirantis.azurecr.io/iam/mcc-keycloak:25.0.6-20241114073807

Security notes

In total, since Container Cloud 2.28.5, in 2.29.0, 736 Common Vulnerabilities and Exposures (CVE) have been fixed: 125 of critical and 611 of high severity.

The table below includes the total numbers of addressed unique and common vulnerabilities and exposures (CVE) by product component since the 2.28.5 patch release. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component

CVE type

Critical

High

Total

Ceph

Unique

0

6

6

Common

0

177

177

Kaas core

Unique

1

8

9

Common

88

229

317

StackLight

Unique

7

48

55

Common

37

205

242

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK release notes 25.1: Security notes.

Update notes

This section describes the specific actions you as a cloud operator need to complete before or after your Container Cloud cluster update to the Cluster releases 17.4.0 or 16.4.0. For details on update impact and maintenance window planning, see MOSK Update notes.

Consider the information below as a supplement to the generic update procedures published in MOSK Operations Guide: Workflow and configuration of management cluster upgrade and MOSK Cluster update.

Pre-update actions
Update managed clusters to Ubuntu 22.04

In Container Cloud 2.29.0, the Cluster release update of the Ubuntu 20.04-based managed clusters becomes impossible, and Ubuntu 22.04 becomes the only supported version of the operating system. Therefore, ensure that every node of your managed clusters is running Ubuntu 22.04 to unblock managed cluster update in Container Cloud 2.29.0.

For the update procedure, refer to Mirantis OpenStack for Kubernetes documentation: Bare metal operations - Upgrade an operating system distribution.
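
To verify the distribution version on the cluster nodes, you can, for example, inspect the node status using the kubeconfig of the managed cluster. A minimal sketch:

# List the operating system image reported by each node
kubectl --kubeconfig <managedClusterKubeconfig> get nodes \
  -o custom-columns=NAME:.metadata.name,OS:.status.nodeInfo.osImage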

Warning

Management cluster update to Container Cloud 2.29.1 will be blocked if at least one node of any related managed cluster is running Ubuntu 20.04.

Note

Existing management clusters were automatically updated to Ubuntu 22.04 during cluster upgrade to the Cluster release 16.2.0 in Container Cloud 2.27.0. Greenfield deployments of management clusters are also based on Ubuntu 22.04.

Back up custom Grafana dashboards on managed clusters

In Container Cloud 2.29.0, Grafana is updated to version 11 where the following deprecated Angular-based plugins are automatically migrated to the React-based ones:

  • Graph (old) -> Time Series

  • Singlestat -> Stat

  • Stat (old) -> Stat

  • Table (old) -> Table

  • Worldmap -> Geomap

This migration may corrupt custom Grafana dashboards that have Angular-based panels. Therefore, if you have such dashboards on managed clusters, back them up and manually upgrade Angular-based panels before updating to the Cluster release 17.4.0 to prevent custom appearance issues after plugin migration.

Note

All Grafana dashboards provided by StackLight are also migrated to React automatically. For the list of default dashboards, see MOSK Operations Guide: View Grafana dashboards.

Caution

For management clusters that are updated automatically, it is important to remove all Angular-based panels and prepare the backup of custom Grafana dashboards before Container Cloud 2.29.0 is released. For details, see the post-update actions in the 2.28.5 Update notes. Otherwise, custom dashboards using Angular-based plugins may be corrupted and must be manually restored without a backup.

Post-update actions
Start using BareMetalHostInventory instead of BareMetalHost

Container Cloud 2.29.0 introduces the BareMetalHostInventory resource that must be used instead of BareMetalHost for adding and modifying configuration of bare metal servers. Therefore, if you need to modify an existing or create a new configuration of a bare metal host, use BareMetalHostInventory.

Each BareMetalHostInventory object is synchronized with an automatically created BareMetalHost object, which is now used for internal purposes of the Container Cloud private API.

Caution

Any change in the BareMetalHost object will be overwritten by BareMetalHostInventory.

For any existing BareMetalHost object, a BareMetalHostInventory object is created automatically during cluster update.

Update passwords for custom Linux accounts

To match CIS Benchmark compliance checks for Ubuntu Linux 22.04 LTS v2.0.0 L1 Server, Container Cloud 2.29.0 introduces new password policies for local (Linux) user accounts. For details, see Improvements in the CIS Benchmark compliance for Ubuntu, MKE, and Docker.

The rules are applied automatically to all cluster nodes during cluster update. Therefore, if you use custom Linux accounts protected by passwords, do not plan any critical maintenance activities right after cluster upgrade as you may need to update Linux user passwords.
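
The following is a minimal sketch for updating the password of a hypothetical custom account <userName> on a node over SSH; the exact policy requirements are enforced by the node configuration:

# Review the current password aging policy, then set a new password that satisfies the new rules
sudo chage -l <userName>
sudo passwd <userName>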

Note

By default, during cluster creation, mcc-user is created without a password with an option to add an SSH key.

Migrate container runtime from Docker to containerd

Container Cloud 2.29.0 introduces switching of the default container runtime from Docker to containerd on greenfield management and managed clusters.

On existing clusters, perform the mandatory migration from Docker to containerd in the scope of Container Cloud 2.29.x. Otherwise, the management cluster update to Container Cloud 2.30.0 will be blocked.

Important

Container runtime migration involves machine cordoning and draining.

Note

If you have not upgraded the operating system distribution on your machines to Jammy yet, Mirantis recommends migrating machines from Docker to containerd on managed clusters together with distribution upgrade to minimize the maintenance window.

In this case, ensure that all cluster machines are updated at once during the same maintenance window to prevent machines from running different container runtimes.

2.28.5

The Container Cloud patch release 2.28.5, which is based on the 2.28.0 major release, provides the following updates:

  • Support for the patch Cluster release 16.3.5.

  • Support for the patch Cluster release 17.3.5 that represents Mirantis OpenStack for Kubernetes (MOSK) patch release 24.3.2.

  • Support for Mirantis Kubernetes Engine 3.7.18 and Mirantis Container Runtime 23.0.15, which includes containerd 1.6.36.

  • Optional migration of container runtime from Docker to containerd.

  • Bare metal: update of Ubuntu mirror from ubuntu-2024-12-05-003900 to ubuntu-2025-01-08-003900 along with update of minor kernel version from 5.15.0-126-generic to 5.15.0-130-generic.

  • Security fixes for CVEs in images.

This patch release also supports the latest major Cluster releases 17.3.0 and 16.3.0. It does not support greenfield deployments based on deprecated Cluster releases; use the latest available Cluster release instead.

For main deliverables of the parent Container Cloud release of 2.28.5, refer to 2.28.0.

Update notes

This section describes the specific actions you as a cloud operator need to complete before or after your Container Cloud cluster update to the Cluster releases 17.3.5 or 16.3.5.

Consider the information below as a supplement to the generic update procedures published in MOSK Operations Guide: Automatic upgrade of a management cluster and Update to a patch version.

Post-update actions
Optional migration of container runtime from Docker to containerd

Since Container Cloud 2.28.4, Mirantis introduced an optional migration of container runtime from Docker to containerd, which is implemented for existing management and managed bare metal clusters. The use of containerd allows for better Kubernetes performance and component update without pod restart when applying fixes for CVEs. For the migration procedure, refer to MOSK Operations Guide: Migrate container runtime from Docker to containerd.

Note

Container runtime migration becomes mandatory in the scope of Container Cloud 2.29.x. Otherwise, the management cluster update to Container Cloud 2.30.0 will be blocked.

Note

In the Container Cloud 2.28.x series, the default container runtime remains Docker for greenfield deployments. Support for greenfield deployments based on containerd will be announced in one of the following releases.

Important

Container runtime migration involves machine cordoning and draining.

Note

If you have not upgraded the operating system distribution on your machines to Jammy yet, Mirantis recommends migrating machines from Docker to containerd on managed clusters together with distribution upgrade to minimize the maintenance window.

In this case, ensure that all cluster machines are updated at once during the same maintenance window to prevent machines from running different container runtimes.

Back up custom Grafana dashboards

In Container Cloud 2.29.0, Grafana will be updated to version 11 where the following deprecated Angular-based plugins will be automatically migrated to the React-based ones:

  • Graph (old) -> Time Series

  • Singlestat -> Stat

  • Stat (old) -> Stat

  • Table (old) -> Table

  • Worldmap -> Geomap

This migration may corrupt custom Grafana dashboards that have Angular-based panels. Therefore, if you have such dashboards, back them up and manually upgrade Angular-based panels during the course of Container Cloud 2.28.x (Cluster releases 17.3.x and 16.3.x) to prevent custom appearance issues after plugin migration in Container Cloud 2.29.0 (Cluster releases 17.4.0 and 16.4.0).

Note

All Grafana dashboards provided by StackLight are also migrated to React automatically. For the list of default dashboards, see MOSK Operations Guide: View Grafana dashboards.

Warning

For management clusters that are updated automatically, it is important to prepare the backup before Container Cloud 2.29.0 is released. Otherwise, custom dashboards using Angular-based plugins may be corrupted.

For managed clusters, you can perform the backup after the Container Cloud 2.29.0 release date but before updating them to the Cluster release 17.4.0.

Security notes

In total, since Container Cloud 2.28.4, 1 Common Vulnerability and Exposure (CVE) of high severity has been fixed in 2.28.5.

The table below includes the total numbers of addressed unique and common CVEs in images by product component since Container Cloud 2.28.4. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component

CVE type

Critical

High

Total

Kaas core

Unique

0

1

1

Common

0

1

1

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 24.3.2: Security notes.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.28.5 including the Cluster releases 16.3.5 and 17.3.5. For the known issues in the related MOSK release, see MOSK release notes 24.3.2: Known issues.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[47202] Inspection error on bare metal hosts after dnsmasq restart

Note

Moving forward, the workaround for this issue will be moved from Release Notes to MOSK Troubleshooting Guide: Inspection error on bare metal hosts after dnsmasq restart.

If the dnsmasq pod is restarted during the bootstrap of newly added nodes, those nodes may fail to undergo inspection. That can result in an inspection error in the corresponding BareMetalHost objects.

The issue can occur when:

  • The dnsmasq pod was moved to another node.

  • DHCP subnets were changed, including addition or removal. In this case, the dhcpd container of the dnsmasq pod is restarted.

    Caution

    If changing or adding DHCP subnets is required to bootstrap new nodes, wait until the dnsmasq pod becomes ready after the change, and only then create BareMetalHost objects.
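
    The following is a minimal sketch of waiting for the dnsmasq pod, assuming its name is identified first:

    # Identify the dnsmasq pod name, then wait until it reports Ready
    kubectl -n kaas get pod | grep dnsmasq
    kubectl -n kaas wait --for=condition=Ready pod/<dnsmasq-pod-name> --timeout=10m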

To verify whether the nodes are affected:

  1. Verify whether the BareMetalHost objects contain the inspection error:

    kubectl get bmh -n <managed-cluster-namespace-name>
    

    Example of system response:

    NAME            STATE         CONSUMER        ONLINE   ERROR              AGE
    test-master-1   provisioned   test-master-1   true                        9d
    test-master-2   provisioned   test-master-2   true                        9d
    test-master-3   provisioned   test-master-3   true                        9d
    test-worker-1   provisioned   test-worker-1   true                        9d
    test-worker-2   provisioned   test-worker-2   true                        9d
    test-worker-3   inspecting                    true     inspection error   19h
    
  2. Verify whether the dnsmasq pod was in Ready state when the inspection of the affected baremetal hosts (test-worker-3 in the example above) was started:

    kubectl -n kaas get pod <dnsmasq-pod-name> -oyaml
    

    Example of system response:

    ...
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: Initialized
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: ContainersReady
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: PodScheduled
      containerStatuses:
      - containerID: containerd://6dbcf2fc4b36ce4c549c9191ab01f72d0236c51d42947675302675e4bfaf4cdf
        image: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq:base-2-28-alpine-20240812132650
        imageID: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq@sha256:3dad3e278add18e69b2608e462691c4823942641a0f0e25e6811e703e3c23b3b
        lastState:
          terminated:
            containerID: containerd://816fcf079cd544acd74e312065de5b5ed4dbf1dc6159fefffff4f644b5e45987
            exitCode: 0
            finishedAt: "2024-10-11T07:38:35Z"
            reason: Completed
            startedAt: "2024-10-10T15:37:45Z"
        name: dhcpd
        ready: true
        restartCount: 2
        started: true
        state:
          running:
            startedAt: "2024-10-11T07:38:37Z"
      ...
    

    In the system response above, the dhcpd container was not ready between "2024-10-11T07:38:35Z" and "2024-10-11T07:38:54Z".

  3. Verify the affected baremetal host. For example:

    kubectl get bmh -n managed-ns test-worker-3 -oyaml
    

    Example of system response:

    ...
    status:
      errorCount: 15
      errorMessage: Introspection timeout
      errorType: inspection error
      ...
      operationHistory:
        deprovision:
          end: null
          start: null
        inspect:
          end: null
          start: "2024-10-11T07:38:19Z"
        provision:
          end: null
          start: null
        register:
          end: "2024-10-11T07:38:19Z"
          start: "2024-10-11T07:37:25Z"
    

    In the system response above, inspection was started at "2024-10-11T07:38:19Z", immediately before the period of the dhcpd container downtime. Therefore, this node is most likely affected by the issue.

Workaround

  1. Reboot the node using the IPMI reset or cycle command.

  2. If the node fails to boot, remove the failed BareMetalHost object and create it again:

    1. Remove BareMetalHost object. For example:

      kubectl delete bmh -n managed-ns test-worker-3
      
    2. Verify that the BareMetalHost object is removed:

      kubectl get bmh -n managed-ns test-worker-3
      
    3. Create a BareMetalHost object from the template. For example:

      kubectl create -f bmhc-test-worker-3.yaml
      kubectl create -f bmh-test-worker-3.yaml
      
[42386] A load balancer service does not obtain the external IP address

Due to the MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first service that has been changed remains with no external IP address assigned. However, the second service, which was changed later, has the external IP assigned as expected.

To work around the issue, make a dummy change to the service object where external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.
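
A minimal sketch of the cordon and drain commands; the drain flags below are common defaults and may need adjustment for your workloads:

kubectl cordon <nodeName>
kubectl drain <nodeName> --ignore-daemonsets --delete-emptydir-data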


Ceph
[50566] Ceph upgrade is very slow during patch or major cluster update

Due to the upstream Ceph issue 66717, during CVE upgrade of the Ceph daemon image of Ceph Reef 18.2.4, OSDs may start slowly and even fail the startup probe, with the following describe output in the rook-ceph-osd-X pod:

 Warning  Unhealthy  57s (x16 over 3m27s)  kubelet  Startup probe failed:
 ceph daemon health check failed with the following output:
> no valid command found; 10 closest matches:
> 0
> 1
> 2
> abort
> assert
> bluefs debug_inject_read_zeros
> bluefs files list
> bluefs stats
> bluestore bluefs device info [<alloc_size:int>]
> config diff
> admin_socket: invalid command

Workaround:

Complete the following steps during every patch or major cluster update of the Cluster releases 17.2.x, 17.3.x, and 17.4.x (until Ceph 18.2.5 becomes supported):

  1. Plan extra time in the maintenance window for the patch cluster update.

    Slow starts will still impact the update procedure, but completing the following step noticeably shortens the recovery process without affecting the overall cluster state and data responsiveness.

  2. Select one of the following options:

    • Before the cluster update, set the noout flag:

      ceph osd set noout
      

      Once the Ceph OSDs image upgrade is done, unset the flag:

      ceph osd unset noout
      
    • Monitor the Ceph OSDs image upgrade. If the symptoms of slow start appear, set the noout flag as soon as possible. Once the Ceph OSDs image upgrade is done, unset the flag.

[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a managed cluster based on bare metal with Ceph enabled fails with a PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the descriptions of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.


LCM
[39437] Failure to replace a master node on a Container Cloud cluster

Fixed in 2.29.0 (17.4.0 and 16.4.0)

During the replacement of a master node on a cluster of any type, the process may get stuck with Kubelet's NodeReady condition is Unknown in the machine status on the remaining master nodes.

As a workaround, log in on the affected node and run the following command:

docker restart ucp-kubelet
[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.


StackLight
[44193] OpenSearch reaches 85% disk usage watermark affecting the cluster state

Fixed in 2.29.0 (17.4.0 and 16.4.0)

On High Availability (HA) clusters that use Local Volume Provisioner (LVP), Prometheus and OpenSearch from StackLight may share the same pool of storage. In such a configuration, OpenSearch may approach the 85% disk usage watermark due to the combined storage allocation and usage patterns set by the Persistent Volume Claim (PVC) size parameters for Prometheus and OpenSearch, which consume storage the most.

When the 85% threshold is reached, the affected node is transitioned to the read-only state, preventing shard allocation and causing the OpenSearch cluster state to transition to Warning (Yellow) or Critical (Red).

Caution

The issue and the provided workaround apply only for clusters on which OpenSearch and Prometheus utilize the same storage pool.

To verify that the cluster is affected:

  1. Verify the result of the following formula:

    0.8 × OpenSearch_PVC_Size_GB + Prometheus_PVC_Size_GB > 0.85 × Total_Storage_Capacity_GB
    

    In the formula, define the following values:

    OpenSearch_PVC_Size_GB

    Derived from .values.elasticsearch.persistentVolumeUsableStorageSizeGB, defaulting to .values.elasticsearch.persistentVolumeClaimSize if unspecified. To obtain the OpenSearch PVC size:

    kubectl -n <namespaceName> get cluster <clusterName> -o yaml |\
    yq '.spec.providerSpec.value.helmReleases[] | select(.name == "stacklight") | .values.elasticsearch.persistentVolumeClaimSize '
    

    Example of system response:

    10000Gi
    
    Prometheus_PVC_Size_GB

    Sourced from .values.prometheusServer.persistentVolumeClaimSize. To obtain the Prometheus PVC size:

    kubectl -n <namespaceName> get cluster <clusterName> -o yaml |\
    yq '.spec.providerSpec.value.helmReleases[] | select(.name == "stacklight") | .values.prometheusServer.persistentVolumeClaimSize '
    

    Example of system response:

    4000Gi
    
    Total_Storage_Capacity_GB

    Total capacity of the OpenSearch PVCs. For LVP, the capacity of the storage pool. To obtain the total capacity:

    kubectl get pvc -n stacklight -l app=opensearch-master \
    -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
    

    The system response contains multiple outputs, one per opensearch-master node. Select the capacity for the affected node.

    Note

    Convert the values to GB if they are set in different units.

    If the formula result is positive, it is an early indication that the cluster is affected.

  2. Verify whether the OpenSearchClusterStatusWarning or OpenSearchClusterStatusCritical alert is firing. If so, verify the following:

    1. Log in to the OpenSearch web UI.

    2. In Management -> Dev Tools, run the following command:

      GET _cluster/allocation/explain
      

      The following system response indicates that the corresponding node is affected:

      "explanation": "the node is above the low watermark cluster setting \
      [cluster.routing.allocation.disk.watermark.low=85%], using more disk space \
      than the maximum allowed [85.0%], actual free: [xx.xxx%]"
      

      Note

      The system response may contain an even higher watermark percentage than 85.0%, depending on the case.

Workaround:

Warning

The workaround implies adjustment of the retention threshold for OpenSearch. Depending on the new threshold, some old logs will be deleted.

  1. Adjust or set .values.elasticsearch.persistentVolumeUsableStorageSizeGB to a lower value for the affection check formula to be non-positive. For configuration details, see MOSK Operations Guide: StackLight configuration parameters - OpenSearch.

    Mirantis also recommends reserving some space for other PVCs using storage from the pool. Use the following formula to calculate the required space:

    persistentVolumeUsableStorageSizeGB =
    0.84 × ((1 - Reserved_Percentage - Filesystem_Reserve) ×
    Total_Storage_Capacity_GB - Prometheus_PVC_Size_GB) /
    0.8
    

    In the formula, define the following values:

    Reserved_Percentage

    A user-defined variable that specifies what percentage of the total storage capacity should not be used by OpenSearch or Prometheus. This is used to reserve space for other components. It should be expressed as a decimal. For example, for 5% of reservation, Reserved_Percentage is 0.05. Mirantis recommends using 0.05 as a starting point.

    Filesystem_Reserve

    Percentage to deduct for filesystems that may reserve some portion of the available storage, which is marked as occupied. For example, for EXT4, it is 5% by default, so the value must be 0.05.

    Prometheus_PVC_Size_GB

    Sourced from .values.prometheusServer.persistentVolumeClaimSize.

    Total_Storage_Capacity_GB

    Total capacity of the OpenSearch PVCs. For LVP, the capacity of the storage pool. To obtain the total capacity:

    kubectl get pvc -n stacklight -l app=opensearch-master \
    -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
    

    The system response contains multiple outputs, one per opensearch-master node. Select the capacity for the affected node.

    Note

    Convert the values to GB if they are set in different units.

    The formula above provides the maximum safe storage to allocate for .values.elasticsearch.persistentVolumeUsableStorageSizeGB. Use it as a reference when setting .values.elasticsearch.persistentVolumeUsableStorageSizeGB on a cluster. For a worked example, see the sketch after this procedure.

  2. Wait up to 15-20 minutes for OpenSearch to perform the cleanup.

  3. Verify that the cluster is not affected anymore using the procedure above.
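
For illustration, the following worked example applies both formulas with hypothetical values: a 10000 GB OpenSearch PVC, a 4000 GB Prometheus PVC, a 14000 GB storage pool, 5% reserved for other PVCs, and a 5% filesystem reserve:

# Affection check: 0.8*10000 + 4000 = 12000 is greater than 0.85*14000 = 11900, so the cluster is affected
echo '0.8*10000 + 4000 > 0.85*14000' | bc -l

# Maximum safe persistentVolumeUsableStorageSizeGB: 0.84*((1 - 0.05 - 0.05)*14000 - 4000)/0.8 = 9030
echo '0.84*((1 - 0.05 - 0.05)*14000 - 4000)/0.8' | bc -l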


Container Cloud web UI
[50181] Failure to deploy a compact cluster

A compact MOSK cluster fails to be deployed through the Container Cloud web UI because the web UI does not allow adding labels to the control plane machines or changing the dedicatedControlPlane: false value.

To work around the issue, manually add the required labels using the CLI. Once done, the cluster deployment resumes.

[50168] Inability to use a new project right after creation

A newly created project does not display all available tabs in the Container Cloud web UI and shows various access denied errors during the first five minutes after creation.

To work around the issue, wait for five minutes after the project creation and refresh the browser.

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.28.5. For artifacts of the Cluster releases introduced in 2.28.5, see patch Cluster releases 17.3.5 and 16.3.5.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

ironic-python-agent.initramfs Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-antelope-jammy-debug-20250108133235

ironic-python-agent.kernel Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-antelope-jammy-debug-20250108133235

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-167-e7a55fd.tgz

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.41.28.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.41.28.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.41.28.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.41.28.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.41.28.tgz

Docker images

ambassador Updated

mirantis.azurecr.io/core/external/nginx:1.41.28

baremetal-dnsmasq

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-28-alpine-20241022121257

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-2-28-alpine-20241217153430

bm-collective

mirantis.azurecr.io/bm/bm-collective:base-2-28-alpine-20241217153957

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.41.28

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20241128095555

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:antelope-jammy-20241128095555

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240913123302

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-2-28-alpine-20241217153549

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-34a4f54-20240910081335

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-jammy-20240927170336

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20241022120929

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.41.28.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.41.28.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.41.28.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.41.28.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.41.28.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.41.28.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.41.28.tgz

credentials-controller Deprecated

https://binary.mirantis.com/core/helm/credentials-controller-1.41.28.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.41.28.tgz

host-os-modules-controller

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.41.28.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.41.28.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.41.28.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.41.28.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.41.28.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.41.28.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.41.28.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.41.28.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.41.28.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.41.28.tgz

openstack-provider Deprecated

https://binary.mirantis.com/core/helm/openstack-provider-1.41.28.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.41.28.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.41.28.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.41.28.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.41.28.tgz

secret-controller

https://binary.mirantis.com/core/helm/secret-controller-1.41.28.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.41.28.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.41.28

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.41.28

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.41.28

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-9

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.41.28

credentials-controller Deprecated

mirantis.azurecr.io/core/credentials-controller:1.41.28

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.41.28

frontend Updated

mirantis.azurecr.io/core/frontend:1.41.28

host-os-modules-controller Updated

mirantis.azurecr.io/core/host-os-modules-controller:1.41.28

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.41.28

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.41.28

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.41.28

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.41.28

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.41.28

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.41.28

mcc-cache-warmup Updated

mirantis.azurecr.io/core/mcc-cache-warmup:1.41.28

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.41.28

openstack-cluster-api-controller Deprecated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.41.28

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.41.28

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.41.28

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-14

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.41.28

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.41.28

secret-controller Updated

mirantis.azurecr.io/core/secret-controller:1.41.28

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.41.28

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/core/helm/iam-1.41.28.tgz

Docker images

kubectl

mirantis.azurecr.io/general/kubectl:20240926142019

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-ba8ada4-20240405150338

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240909113408

mcc-keycloak

mirantis.azurecr.io/iam/mcc-keycloak:25.0.6-20241114073807

2.28.4

The Container Cloud patch release 2.28.4, which is based on the 2.28.0 major release, provides the following updates:

  • Support for the patch Cluster release 16.3.4.

  • Support for the patch Cluster release 17.3.4 that represents Mirantis OpenStack for Kubernetes (MOSK) patch release 24.3.1.

  • Support for Mirantis Kubernetes Engine 3.7.17 and Mirantis Container Runtime 23.0.15, which includes containerd 1.6.36.

  • Optional migration of container runtime from Docker to containerd.

  • Bare metal: update of Ubuntu mirror from ubuntu-2024-11-18-003900 to ubuntu-2024-12-05-003900 along with update of minor kernel version from 5.15.0-125-generic to 5.15.0-126-generic.

  • Security fixes for CVEs in images.

  • OpenStack provider: suspension of support for cluster deployment and update. For details, see Deprecation notes.

This patch release also supports the latest major Cluster releases 17.3.0 and 16.3.0. It does not support greenfield deployments based on deprecated Cluster releases; use the latest available Cluster release instead.

For main deliverables of the parent Container Cloud release of 2.28.4, refer to 2.28.0.

Update notes

This section describes the specific actions you as a cloud operator need to complete before or after your Container Cloud cluster update to the Cluster releases 17.3.4 or 16.3.4.

Important

For MOSK deployments, although MOSK 24.3.1 is classified as a patch release, as a cloud operator, you will be performing a major update regardless of the upgrade path: whether you are upgrading from patch 24.2.5 or major version 24.3. For details, see MOSK 24.3.1 release notes: Update notes.

Consider the information below as a supplement to the generic update procedures published in MOSK Operations Guide: Automatic upgrade of a management cluster and Update to a patch version.

Post-update actions
Optional migration of container runtime from Docker to containerd

Container Cloud 2.28.4 introduces an optional migration of container runtime from Docker to containerd, which is implemented for existing management and managed bare metal clusters. The use of containerd allows for better Kubernetes performance and component update without pod restart when applying fixes for CVEs. For the migration procedure, refer to MOSK Operations Guide: Migrate container runtime from Docker to containerd.

Note

Container runtime migration becomes mandatory in the scope of Container Cloud 2.29.x. Otherwise, the management cluster update to Container Cloud 2.30.0 will be blocked.

Note

In the Container Cloud 2.28.x series, the default container runtime remains Docker for greenfield deployments. Support for greenfield deployments based on containerd will be announced in one of the following releases.

Important

Container runtime migration involves machine cordoning and draining.

Note

If you have not upgraded the operating system distribution on your machines to Jammy yet, Mirantis recommends migrating machines from Docker to containerd on managed clusters together with distribution upgrade to minimize the maintenance window.

In this case, ensure that all cluster machines are updated at once during the same maintenance window to prevent machines from running different container runtimes.

Back up custom Grafana dashboards

In Container Cloud 2.29.0, Grafana will be updated to version 11 where the following deprecated Angular-based plugins will be automatically migrated to the React-based ones:

  • Graph (old) -> Time Series

  • Singlestat -> Stat

  • Stat (old) -> Stat

  • Table (old) -> Table

  • Worldmap -> Geomap

This migration may corrupt custom Grafana dashboards that have Angular-based panels. Therefore, if you have such dashboards, back them up and manually upgrade Angular-based panels during the course of Container Cloud 2.28.x (Cluster releases 17.3.x and 16.3.x) to prevent custom appearance issues after plugin migration in Container Cloud 2.29.0 (Cluster releases 17.4.0 and 16.4.0).

Note

All Grafana dashboards provided by StackLight are also migrated to React automatically. For the list of default dashboards, see MOSK Operations Guide: View Grafana dashboards.

Warning

For management clusters that are updated automatically, it is important to prepare the backup before Container Cloud 2.29.0 is released. Otherwise, custom dashboards using Angular-based plugins may be corrupted.

For managed clusters, you can perform the backup after the Container Cloud 2.29.0 release date but before updating them to the Cluster release 17.4.0.
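
One possible way to prepare such a backup is to export the JSON model of each custom dashboard through the Grafana HTTP API. The example below is a minimal sketch that assumes a hypothetical Grafana address, API token, and dashboard UID; alternatively, you can export a dashboard manually from the Grafana web UI using its Share > Export option:

# Export the JSON model of a custom dashboard (hypothetical address, token, and UID)
curl -s -H "Authorization: Bearer <api-token>" \
  https://<grafana-address>/api/dashboards/uid/<dashboard-uid> \
  -o <dashboard-name>-backup.json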

Security notes

In total, since Container Cloud 2.28.3, 158 Common Vulnerabilities and Exposures (CVE) have been fixed in 2.28.4: 10 of critical and 148 of high severity.

The table below includes the total numbers of addressed unique and common CVEs in images by product component since Container Cloud 2.28.3. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component   CVE type   Critical   High   Total

Ceph                Unique     0          3      3
                    Common     0          7      7

Kaas core           Unique     1          18     19
                    Common     4          92     96

StackLight          Unique     3          19     22
                    Common     6          49     55

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 24.3.1: Security notes.

Addressed issues

The following issues have been addressed in the Container Cloud patch release 2.28.4 along with the patch Cluster releases 16.3.4 and 17.3.4:

  • [30294] [LCM] Fixed the issue that prevented replacement of a manager machine during the calico-node Pod start on a new node that has the same IP address as the node being replaced.

  • [5782] [LCM] Fixed the issue that prevented deployment of a manager machine during node replacement.

  • [5568] [LCM] Fixed the issue that prevented cleaning of resources by the calico-kube-controllers Pod during unsafe or forced deletion of a manager machine.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.28.4 including the Cluster releases 16.3.4 and 17.3.4. For the known issues in the related MOSK release, see MOSK release notes 24.3.1: Known issues.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[47202] Inspection error on bare metal hosts after dnsmasq restart

Note

Moving forward, the workaround for this issue will be moved from Release Notes to MOSK Troubleshooting Guide: Inspection error on bare metal hosts after dnsmasq restart.

If the dnsmasq pod is restarted during the bootstrap of newly added nodes, those nodes may fail to undergo inspection. That can result in inspection error in the corresponding BareMetalHost objects.

The issue can occur when:

  • The dnsmasq pod was moved to another node.

  • DHCP subnets were changed, including addition or removal. In this case, the dhcpd container of the dnsmasq pod is restarted.

    Caution

    If you need to change or add DHCP subnets to bootstrap new nodes, wait until the dnsmasq pod becomes ready after the change, and only then create BareMetalHost objects.
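
    For example, a minimal way to wait for the dnsmasq pod to become ready again, assuming you have identified its name in the kaas namespace as in the verification steps below:

    kubectl -n kaas wait --for=condition=Ready pod <dnsmasq-pod-name> --timeout=10m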

To verify whether the nodes are affected:

  1. Verify whether the BareMetalHost objects contain the inspection error:

    kubectl get bmh -n <managed-cluster-namespace-name>
    

    Example of system response:

    NAME            STATE         CONSUMER        ONLINE   ERROR              AGE
    test-master-1   provisioned   test-master-1   true                        9d
    test-master-2   provisioned   test-master-2   true                        9d
    test-master-3   provisioned   test-master-3   true                        9d
    test-worker-1   provisioned   test-worker-1   true                        9d
    test-worker-2   provisioned   test-worker-2   true                        9d
    test-worker-3   inspecting                    true     inspection error   19h
    
  2. Verify whether the dnsmasq pod was in Ready state when the inspection of the affected baremetal hosts (test-worker-3 in the example above) was started:

    kubectl -n kaas get pod <dnsmasq-pod-name> -oyaml
    

    Example of system response:

    ...
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: Initialized
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: ContainersReady
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: PodScheduled
      containerStatuses:
      - containerID: containerd://6dbcf2fc4b36ce4c549c9191ab01f72d0236c51d42947675302675e4bfaf4cdf
        image: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq:base-2-28-alpine-20240812132650
        imageID: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq@sha256:3dad3e278add18e69b2608e462691c4823942641a0f0e25e6811e703e3c23b3b
        lastState:
          terminated:
            containerID: containerd://816fcf079cd544acd74e312065de5b5ed4dbf1dc6159fefffff4f644b5e45987
            exitCode: 0
            finishedAt: "2024-10-11T07:38:35Z"
            reason: Completed
            startedAt: "2024-10-10T15:37:45Z"
        name: dhcpd
        ready: true
        restartCount: 2
        started: true
        state:
          running:
            startedAt: "2024-10-11T07:38:37Z"
      ...
    

    In the system response above, the dhcpd container was not ready between "2024-10-11T07:38:35Z" and "2024-10-11T07:38:54Z".

  3. Verify the affected baremetal host. For example:

    kubectl get bmh -n managed-ns test-worker-3 -oyaml
    

    Example of system response:

    ...
    status:
      errorCount: 15
      errorMessage: Introspection timeout
      errorType: inspection error
      ...
      operationHistory:
        deprovision:
          end: null
          start: null
        inspect:
          end: null
          start: "2024-10-11T07:38:19Z"
        provision:
          end: null
          start: null
        register:
          end: "2024-10-11T07:38:19Z"
          start: "2024-10-11T07:37:25Z"
    

    In the system response above, inspection was started at "2024-10-11T07:38:19Z", immediately before the period of the dhcpd container downtime. Therefore, this node is most likely affected by the issue.

Workaround

  1. Reboot the node using the IPMI reset or cycle command.

  2. If the node fails to boot, remove the failed BareMetalHost object and create it again:

    1. Remove BareMetalHost object. For example:

      kubectl delete bmh -n managed-ns test-worker-3
      
    2. Verify that the BareMetalHost object is removed:

      kubectl get bmh -n managed-ns test-worker-3
      
    3. Create a BareMetalHost object from the template. For example:

      kubectl create -f bmhc-test-worker-3.yaml
      kubectl create -f bmh-test-worker-3.yaml
      
[42386] A load balancer service does not obtain the external IP address

Due to the MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first service that was changed loses its external IP address, while the second service, which was changed later, has the external IP address assigned as expected.

To work around the issue, make a dummy change to the service object where external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.
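
The commands below simply spell out this workaround in a runnable form; replace <nodeName> with the name of the node that runs the ironic Pod:

kubectl cordon <nodeName>
kubectl drain <nodeName>

Depending on the workloads running on the node, kubectl drain may require additional flags, for example, --ignore-daemonsets.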


Ceph
[50566] Ceph upgrade is very slow during patch or major cluster update

Due to the upstream Ceph issue 66717, during CVE upgrade of the Ceph daemon image of Ceph Reef 18.2.4, OSDs may start slow and even fail the starting probe with the following describe output in the rook-ceph-osd-X pod:

 Warning  Unhealthy  57s (x16 over 3m27s)  kubelet  Startup probe failed:
 ceph daemon health check failed with the following output:
> no valid command found; 10 closest matches:
> 0
> 1
> 2
> abort
> assert
> bluefs debug_inject_read_zeros
> bluefs files list
> bluefs stats
> bluestore bluefs device info [<alloc_size:int>]
> config diff
> admin_socket: invalid command

Workaround:

Complete the following steps during every patch or major cluster update of the Cluster releases 17.2.x, 17.3.x, and 17.4.x (until Ceph 18.2.5 becomes supported):

  1. Plan extra time in the maintenance window for the patch cluster update.

    Slow starts will still impact the update procedure, but after completing the following step, the recovery process noticeably shortens without affecting the overall cluster state and data responsiveness.

  2. Select one of the following options:

    • Before the cluster update, set the noout flag:

      ceph osd set noout
      

      Once the Ceph OSDs image upgrade is done, unset the flag:

      ceph osd unset noout
      
    • Monitor the Ceph OSDs image upgrade. If the symptoms of slow start appear, set the noout flag as soon as possible. Once the Ceph OSDs image upgrade is done, unset the flag.
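
      One way to monitor the Ceph OSD pods during the image upgrade, assuming the standard Rook label app=rook-ceph-osd, is to watch them being recreated with the new image:

      kubectl -n rook-ceph get pods -l app=rook-ceph-osd -w

      If some OSD pods stay in a not-ready state for a long time after the restart, treat it as a symptom of the slow start described above.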

[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a managed cluster that is based on bare metal and has Ceph enabled fails with the PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the description of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.


LCM
[39437] Failure to replace a master node on a Container Cloud cluster

Fixed in 2.29.0 (17.4.0 and 16.4.0)

During the replacement of a master node on a cluster of any type, the process may get stuck with Kubelet's NodeReady condition is Unknown in the machine status on the remaining master nodes.

As a workaround, log in on the affected node and run the following command:

docker restart ucp-kubelet
[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.
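
The following commands are a minimal sketch of steps 1, 3, and 4 of this workaround. The Pod name mariadb-server-0 and the kaas namespace are assumptions; adjust them to your environment:

# Back up the MariaDB data directory of the affected Pod
kubectl cp kaas/mariadb-server-0:/var/lib/mysql ./mariadb-server-0-mysql-backup

# Remove the galera.cache file for the affected Pod
kubectl -n kaas exec mariadb-server-0 -- rm -f /var/lib/mysql/galera.cache

# Remove the affected Pod so that Kubernetes recreates it
kubectl -n kaas delete pod mariadb-server-0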


StackLight
[44193] OpenSearch reaches 85% disk usage watermark affecting the cluster state

Fixed in 2.29.0 (17.4.0 and 16.4.0)

On High Availability (HA) clusters that use Local Volume Provisioner (LVP), Prometheus and OpenSearch from StackLight may share the same pool of storage. In such a configuration, OpenSearch may approach the 85% disk usage watermark due to the combined storage allocation and usage patterns set by the Persistent Volume Claim (PVC) size parameters for Prometheus and OpenSearch, which consume storage the most.

When the 85% threshold is reached, the affected node is transitioned to the read-only state, preventing shard allocation and causing the OpenSearch cluster state to transition to Warning (Yellow) or Critical (Red).

Caution

The issue and the provided workaround apply only for clusters on which OpenSearch and Prometheus utilize the same storage pool.

To verify that the cluster is affected:

  1. Verify the result of the following formula:

    0.8 × OpenSearch_PVC_Size_GB + Prometheus_PVC_Size_GB > 0.85 × Total_Storage_Capacity_GB
    

    In the formula, define the following values:

    OpenSearch_PVC_Size_GB

    Derived from .values.elasticsearch.persistentVolumeUsableStorageSizeGB, defaulting to .values.elasticsearch.persistentVolumeClaimSize if unspecified. To obtain the OpenSearch PVC size:

    kubectl -n <namespaceName> get cluster <clusterName> -o yaml |\
    yq '.spec.providerSpec.value.helmReleases[] | select(.name == "stacklight") | .values.elasticsearch.persistentVolumeClaimSize '
    

    Example of system response:

    10000Gi
    
    Prometheus_PVC_Size_GB

    Sourced from .values.prometheusServer.persistentVolumeClaimSize. To obtain the Prometheus PVC size:

    kubectl -n <namespaceName> get cluster <clusterName> -o yaml |\
    yq '.spec.providerSpec.value.helmReleases[] | select(.name == "stacklight") | .values.prometheusServer.persistentVolumeClaimSize '
    

    Example of system response:

    4000Gi
    
    Total_Storage_Capacity_GB

    Total capacity of the OpenSearch PVCs. For LVP, the capacity of the storage pool. To obtain the total capacity:

    kubectl get pvc -n stacklight -l app=opensearch-master \
    -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
    

    The system response contains multiple outputs, one per opensearch-master node. Select the capacity for the affected node.

    Note

    Convert the values to GB if they are set in different units.

    If the formula result is positive, it is an early indication that the cluster is affected.
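
    For example, with hypothetical values of OpenSearch_PVC_Size_GB = 10000, Prometheus_PVC_Size_GB = 4000, and Total_Storage_Capacity_GB = 14000:

    0.8 × 10000 + 4000 = 12000 > 0.85 × 14000 = 11900

    The left-hand side exceeds the right-hand side, so such a cluster would be affected.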

  2. Verify whether the OpenSearchClusterStatusWarning or OpenSearchClusterStatusCritical alert is firing. And if so, verify the following:

    1. Log in to the OpenSearch web UI.

    2. In Management -> Dev Tools, run the following command:

      GET _cluster/allocation/explain
      

      The following system response indicates that the corresponding node is affected:

      "explanation": "the node is above the low watermark cluster setting \
      [cluster.routing.allocation.disk.watermark.low=85%], using more disk space \
      than the maximum allowed [85.0%], actual free: [xx.xxx%]"
      

      Note

      The system response may contain even higher watermark percent than 85.0%, depending on the case.

Workaround:

Warning

The workaround implies adjustment of the retention threshold for OpenSearch. Depending on the new threshold, some old logs will be deleted.

  1. Adjust or set .values.elasticsearch.persistentVolumeUsableStorageSizeGB to a lower value so that the formula used above to verify whether the cluster is affected produces a non-positive result. For configuration details, see MOSK Operations Guide: StackLight configuration parameters - OpenSearch.

    Mirantis also recommends reserving some space for other PVCs using storage from the pool. Use the following formula to calculate the required space:

    persistentVolumeUsableStorageSizeGB =
    0.84 × ((1 - Reserved_Percentage - Filesystem_Reserve) ×
    Total_Storage_Capacity_GB - Prometheus_PVC_Size_GB) /
    0.8
    

    In the formula, define the following values:

    Reserved_Percentage

    A user-defined variable that specifies what percentage of the total storage capacity should not be used by OpenSearch or Prometheus. This is used to reserve space for other components. It should be expressed as a decimal. For example, for 5% of reservation, Reserved_Percentage is 0.05. Mirantis recommends using 0.05 as a starting point.

    Filesystem_Reserve

    Percentage to deduct for filesystems that may reserve some portion of the available storage, which is marked as occupied. For example, for EXT4, it is 5% by default, so the value must be 0.05.

    Prometheus_PVC_Size_GB

    Sourced from .values.prometheusServer.persistentVolumeClaimSize.

    Total_Storage_Capacity_GB

    Total capacity of the OpenSearch PVCs. For LVP, the capacity of the storage pool. To obtain the total capacity:

    kubectl get pvc -n stacklight -l app=opensearch-master \
    -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
    

    The system response contains multiple outputs, one per opensearch-master node. Select the capacity for the affected node.

    Note

    Convert the values to GB if they are set in different units.

    The formula above provides the maximum safe storage to allocate for .values.elasticsearch.persistentVolumeUsableStorageSizeGB. Use it as a reference when setting this parameter on a cluster.
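
    For example, with hypothetical values of Reserved_Percentage = 0.05, Filesystem_Reserve = 0.05, Prometheus_PVC_Size_GB = 4000, and Total_Storage_Capacity_GB = 14000:

    persistentVolumeUsableStorageSizeGB =
    0.84 × ((1 - 0.05 - 0.05) × 14000 - 4000) / 0.8 = 0.84 × 8600 / 0.8 ≈ 9030

    The resulting value is set in the stacklight Helm release values of the Cluster object, in the same location that the verification commands above read from. A minimal sketch of the relevant fragment:

    spec:
      providerSpec:
        value:
          helmReleases:
          - name: stacklight
            values:
              elasticsearch:
                persistentVolumeUsableStorageSizeGB: 9030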

  2. Wait up to 15-20 minutes for OpenSearch to perform the cleaning.

  3. Verify that the cluster is not affected anymore using the procedure above.


Container Cloud web UI
[50181] Failure to deploy a compact cluster

A compact MOSK cluster fails to be deployed through the Container Cloud web UI because the web UI does not allow adding labels to the control plane machines or changing the dedicatedControlPlane: false value.

To work around the issue, manually add the required labels using CLI. Once done, the cluster deployment resumes.

[50168] Inability to use a new project right after creation

A newly created project does not display all available tabs in the Container Cloud web UI and shows various access denied errors during the first five minutes after creation.

To work around the issue, wait five minutes after the project creation and refresh the browser.

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.28.4. For artifacts of the Cluster releases introduced in 2.28.4, see patch Cluster releases 17.3.4 and 16.3.4.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

ironic-python-agent.initramfs Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-antelope-jammy-debug-20241205172311

ironic-python-agent.kernel Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-antelope-jammy-debug-20241205172311

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-167-e7a55fd.tgz

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.41.26.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.41.26.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.41.26.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.41.26.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.41.26.tgz

Docker images

ambassador Updated

mirantis.azurecr.io/core/external/nginx:1.41.26

baremetal-dnsmasq

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-28-alpine-20241022121257

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-2-28-alpine-20241217153430

bm-collective Updated

mirantis.azurecr.io/bm/bm-collective:base-2-28-alpine-20241217153957

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.41.26

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20241128095555

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:antelope-jammy-20241128095555

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240913123302

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-2-28-alpine-20241217153549

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-34a4f54-20240910081335

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-jammy-20240927170336

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20241022120929

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.41.26.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.41.26.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.41.26.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.41.26.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.41.26.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.41.26.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.41.26.tgz

credentials-controller

https://binary.mirantis.com/core/helm/credentials-controller-1.41.26.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.41.26.tgz

host-os-modules-controller

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.41.26.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.41.26.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.41.26.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.41.26.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.41.26.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.41.26.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.41.26.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.41.26.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.41.26.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.41.26.tgz

openstack-provider Deprecated

https://binary.mirantis.com/core/helm/openstack-provider-1.41.26.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.41.26.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.41.26.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.41.26.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.41.26.tgz

secret-controller

https://binary.mirantis.com/core/helm/secret-controller-1.41.26.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.41.26.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.41.26.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.41.26

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.41.26

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.41.26

cert-manager-controller Updated

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-9

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.41.26

credentials-controller Updated

mirantis.azurecr.io/core/credentials-controller:1.41.26

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.41.26

frontend Updated

mirantis.azurecr.io/core/frontend:1.41.26

host-os-modules-controller Updated

mirantis.azurecr.io/core/host-os-modules-controller:1.41.26

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.41.26

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.41.26

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.41.26

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.41.26

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.41.26

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.41.26

mcc-cache-warmup Updated

mirantis.azurecr.io/core/mcc-cache-warmup:1.41.26

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.41.26

openstack-cluster-api-controller Deprecated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.41.26

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.41.26

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.41.26

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-14

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.41.26

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.41.26

secret-controller Updated

mirantis.azurecr.io/core/secret-controller:1.41.26

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.41.26

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/core/helm/iam-1.41.26.tgz

Docker images

kubectl

mirantis.azurecr.io/general/kubectl:20240926142019

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-ba8ada4-20240405150338

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240909113408

mcc-keycloak

mirantis.azurecr.io/iam/mcc-keycloak:25.0.6-20241114073807

Releases delivered in 2024

This section contains historical information on the unsupported Container Cloud releases delivered in 2024. For the latest supported Container Cloud release, see Container Cloud releases.

Unsupported Container Cloud releases 2024

Version

Release date

Summary

2.28.3

Dec 09, 2024

Container Cloud 2.28.3 is the third patch release of the 2.28.x release series that introduces the following updates:

  • Support for the patch Cluster release 16.3.3.

  • Support for the patch Cluster releases 16.2.7 and 17.2.7 that represents MOSK patch release 24.2.5.

  • Bare metal: update of Ubuntu mirror to ubuntu-2024-11-18-003900 along with update of minor kernel version to 5.15.0-125-generic.

  • Security fixes for CVEs in images.

2.28.2

Nov 18, 2024

Container Cloud 2.28.2 is the second patch release of the 2.28.x release series that introduces the following updates:

  • Support for the patch Cluster release 16.3.2.

  • Support for the patch Cluster releases 16.2.6 and 17.2.6 that represents MOSK patch release 24.2.4.

  • Support for MKE 3.7.16.

  • Bare metal: update of Ubuntu mirror to ubuntu-2024-10-28-012906 along with update of minor kernel version to 5.15.0-124-generic.

  • Security fixes for CVEs in images.

2.28.1

Oct 30, 2024

Container Cloud 2.28.1 is the first patch release of the 2.28.x release series that introduces the following updates:

  • Support for the patch Cluster release 16.3.1.

  • Support for the patch Cluster releases 16.2.5 and 17.2.5 that represents MOSK patch release 24.2.3.

  • Support for MKE 3.7.15.

  • Bare metal: update of Ubuntu mirror to ubuntu-2024-10-14-013948 along with update of minor kernel version to 5.15.0-122-generic.

  • Security fixes for CVEs in images.

2.28.0

Oct 16, 2024

  • General availability for Ubuntu 22.04 on MOSK clusters

  • Improvements in the CIS Benchmark compliance for Ubuntu Linux 22.04 LTS v2.0.0 L1 Server

  • Support for MKE 3.7.12 on clusters following the major update path

  • Support for MCR 23.0.14

  • Update group for controller nodes

  • Reboot of machines using update groups

  • Amendments for the ClusterUpdatePlan object

  • Refactoring of delayed auto-update of a management cluster

  • Self-diagnostics for management and managed clusters

  • Configuration of groups in auditd

  • Container Cloud web UI enhancements for the bare metal provider

  • Day-2 operations for bare metal:

    • Updating modules

    • Configuration enhancements for modules

  • StackLight:

    • Monitoring of LCM issues

    • Refactoring of StackLight expiration alerts

  • Documentation enhancements

2.27.4

Sep 16, 2024

Container Cloud 2.27.4 is the fourth patch release of the 2.27.x release series that introduces the following updates:

  • Support for the patch Cluster releases 16.2.4 and 17.2.4 that represents MOSK patch release 24.2.2.

  • Bare metal: update of Ubuntu mirror to ubuntu-2024-08-21-014714 along with update of minor kernel version to 5.15.0-119-generic.

  • Security fixes for CVEs in images.

2.27.3

Aug 27, 2024

Container Cloud 2.27.3 is the third patch release of the 2.27.x release series that introduces the following updates:

  • Support for the patch Cluster releases 16.2.3 and 17.2.3 that represents MOSK patch release 24.2.1.

  • Support for MKE 3.7.12.

  • Improvements in the MKE benchmark compliance (control ID 5.1.5).

  • Bare metal: update of Ubuntu mirror to ubuntu-2024-08-06-014502 along with update of minor kernel version to 5.15.0-117-generic.

  • VMware vSphere: suspension of support for cluster deployment, update, and attachment.

  • Security fixes for CVEs in images.

2.27.2

Aug 05, 2024

Container Cloud 2.27.2 is the second patch release of the 2.27.x release series that introduces the following updates:

  • Support for the patch Cluster release 16.2.2.

  • Support for the patch Cluster releases 16.1.7 and 17.1.7 that represents MOSK patch release 24.1.7.

  • Support for MKE 3.7.11.

  • Bare metal: update of Ubuntu mirror to ubuntu-2024-07-16-014744 along with update of the minor kernel version to 5.15.0-116-generic (Cluster release 16.2.2).

  • Security fixes for CVEs in images.

2.27.1

Jul 16, 2024

Container Cloud 2.27.1 is the first patch release of the 2.27.x release series that introduces the following updates:

  • Support for the patch Cluster release 16.2.1.

  • Support for the patch Cluster releases 16.1.6 and 17.1.6 that represents MOSK patch release 24.1.6.

  • Support for MKE 3.7.10.

  • Support for docker-ee-cli 23.0.13 in MCR 23.0.11 to fix several CVEs.

  • Bare metal: update of Ubuntu mirror to ubuntu-2024-06-27-095142 along with update of minor kernel version to 5.15.0-113-generic.

  • Security fixes for CVEs in images.

  • Bug fixes.

2.27.0

Jul 02, 2024

  • MKE:

    • MKE 3.7.8 for clusters that follow major update path

    • Improvements in the MKE benchmark compliance

  • Bare metal:

    • General availability for Ubuntu 22.04 on bare metal clusters

    • Improvements in the day-2 management API for bare metal clusters

    • Optimization of strict filtering for devices on bare metal clusters

    • Deprecation of SubnetPool and MetalLBConfigTemplate objects

  • LCM:

    • The ClusterUpdatePlan object for a granular cluster update

    • Update groups for worker machines

    • LCM Agent heartbeats

    • Handling secret leftovers using secret-controller

    • MariaDB backup for bare metal and vSphere providers

  • Ceph:

    • Automatic upgrade from Quincy to Reef

    • Support for Rook v1.13

    • Setting a configuration section for Rook parameters

  • StackLight:

    • Monitoring of I/O errors in kernel logs

    • S.M.A.R.T. metrics for creating alert rules on bare metal clusters

    • Improvements for OpenSearch and OpenSearch Indices Grafana dashboards

    • Removal of grafana-image-renderer

2.26.5

June 18, 2024

Container Cloud 2.26.5 is the fifth patch release of the 2.26.x and MOSK 24.1.x release series that introduces the following updates:

  • Support for the patch Cluster releases 16.1.5 and 17.1.5 that represents MOSK patch release 24.1.5.

  • Bare metal: update of Ubuntu mirror to 20.04~20240517090228 along with update of minor kernel version to 5.15.0-107-generic.

  • Security fixes for CVEs in images.

  • Bug fixes.

2.26.4

May 20, 2024

Container Cloud 2.26.4 is the fourth patch release of the 2.26.x and MOSK 24.1.x release series that introduces the following updates:

  • Support for the patch Cluster releases 16.1.4 and 17.1.4 that represents MOSK patch release 24.1.4.

  • Support for MKE 3.7.8.

  • Bare metal: update of Ubuntu mirror to 20.04~20240502102020 along with update of minor kernel version to 5.15.0-105-generic.

  • Security fixes for CVEs in images.

  • Bug fixes.

2.26.3

Apr 29, 2024

Container Cloud 2.26.3 is the third patch release of the 2.26.x and MOSK 24.1.x release series that introduces the following updates:

  • Support for the patch Cluster releases 16.1.3 and 17.1.3 that represents MOSK patch release 24.1.3.

  • Support for MKE 3.7.7.

  • Bare metal: update of Ubuntu mirror to 20.04~20240411171541 along with update of minor kernel version to 5.15.0-102-generic.

  • Security fixes for CVEs in images.

  • Bug fixes.

2.26.2

Apr 08, 2024

Container Cloud 2.26.2 is the second patch release of the 2.26.x and MOSK 24.1.x release series that introduces the following updates:

  • Support for the patch Cluster releases 16.1.2 and 17.1.2 that represents MOSK patch release 24.1.2.

  • Support for MKE 3.7.6.

  • Support for docker-ee-cli 23.0.10 in MCR 23.0.9 to fix several CVEs.

  • Bare metal: update of Ubuntu mirror to 20.04~20240324172903 along with update of minor kernel version to 5.15.0-101-generic.

  • Security fixes for CVEs in images.

2.26.1

Mar 20, 2024

Container Cloud 2.26.1 is the first patch release of the 2.26.x and MOSK 24.1.x release series that introduces the following updates:

  • Support for the patch Cluster releases 16.1.1 and 17.1.1 that represents MOSK patch release 24.1.1.

  • Support for MKE 3.7.6.

  • Security fixes for CVEs in images.

2.26.0

Mar 04, 2024

  • LCM:

    • Pre-update inspection of pinned product artifacts in a Cluster object

    • Disablement of worker machines on managed clusters

    • Health monitoring of cluster LCM operations

    • Support for MKE 3.7.5 and MCR 23.0.9

  • Security:

    • Support for Kubernetes auditing and profiling on management clusters

    • Policy Controller for validating pod image signatures

    • Configuring trusted certificates for Keycloak

  • Bare metal:

    • Day-2 management API for bare metal clusters

    • Strict filtering for devices on bare metal clusters

    • Dynamic IP allocation for faster host provisioning

    • Cleanup of LVM thin pool volumes during cluster provisioning

    • Wiping a device or partition before a bare metal cluster deployment

    • Container Cloud web UI improvements

  • Ceph:

    • Support for Rook v1.12

    • Support for custom device classes

    • Network policies for Rook Ceph daemons

  • StackLight:

    • Upgraded logging pipeline

    • Support for custom labels during alert injection

  • Documentation enhancements

2.25.4

Jan 10, 2024

Container Cloud 2.25.4 is the fourth patch release of the 2.25.x and MOSK 23.3.x release series that introduces the following updates:

  • Patch Cluster release 17.0.4 for MOSK 23.3.4

  • Patch Cluster release 16.0.4

  • Security fixes for CVEs in images

2.28.3

Important

For MOSK clusters, Container Cloud 2.28.3 is the continuation for MOSK 24.2.x series using the patch Cluster release 17.2.7. For the update path of 24.1, 24.2, and 24.3 series, see MOSK documentation: Release Compatibility Matrix - Managed cluster update schema.

The management cluster of a MOSK 24.2.x cluster is automatically updated to the latest patch Cluster release 16.3.3.

The Container Cloud patch release 2.28.3, which is based on the 2.28.0 major release, provides the following updates:

  • Support for the patch Cluster release 16.3.3.

  • Support for the patch Cluster releases 16.2.7 and 17.2.7 that represents Mirantis OpenStack for Kubernetes (MOSK) patch release 24.2.5.

  • Bare metal: update of Ubuntu mirror from ubuntu-2024-10-28-012906 to ubuntu-2024-11-18-003900 along with update of minor kernel version from 5.15.0-124-generic to 5.15.0-125-generic.

  • Security fixes for CVEs in images.

This patch release also supports the latest major Cluster releases 17.3.0 and 16.3.0. And it does not support greenfield deployments based on deprecated Cluster releases. Use the latest available Cluster release instead.

For main deliverables of the parent Container Cloud release of 2.28.3, refer to 2.28.0.

Security notes

In total, since Container Cloud 2.28.2, 66 Common Vulnerabilities and Exposures (CVE) have been fixed in 2.28.3: 4 of critical and 62 of high severity.

The table below includes the total numbers of addressed unique and common CVEs in images by product component since Container Cloud 2.28.2. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component   CVE type   Critical   High   Total

Ceph                Unique     0          1      1
                    Common     0          3      3

Kaas core           Unique     0          4      4
                    Common     0          7      7

StackLight          Unique     4          21     25
                    Common     4          52     56

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 24.2.5: Security notes.

Addressed issues

The following issues have been addressed in the Container Cloud patch release 2.28.3 along with the patch Cluster releases 16.3.3, 16.2.7, and 17.2.7:

  • [47594] [StackLight] Fixed the issue with Patroni pods getting stuck in the CrashLoopBackOff state due to the patroni container being terminated with reason: OOMKilled.

  • [47929] [LCM] Fixed the issue with incorrect restrictive permissions set for registry certificate files in /etc/docker/certs.d, which were set to 644 instead of 444.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.28.3 including the Cluster releases 16.2.7, 16.3.3, and 17.2.7.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[47202] Inspection error on bare metal hosts after dnsmasq restart

Note

Moving forward, the workaround for this issue will be moved from Release Notes to MOSK Troubleshooting Guide: Inspection error on bare metal hosts after dnsmasq restart.

If the dnsmasq pod is restarted during the bootstrap of newly added nodes, those nodes may fail to undergo inspection. That can result in inspection error in the corresponding BareMetalHost objects.

The issue can occur when:

  • The dnsmasq pod was moved to another node.

  • DHCP subnets were changed, including addition or removal. In this case, the dhcpd container of the dnsmasq pod is restarted.

    Caution

    If you need to change or add DHCP subnets to bootstrap new nodes, wait until the dnsmasq pod becomes ready after the change, and only then create BareMetalHost objects.

To verify whether the nodes are affected:

  1. Verify whether the BareMetalHost objects contain the inspection error:

    kubectl get bmh -n <managed-cluster-namespace-name>
    

    Example of system response:

    NAME            STATE         CONSUMER        ONLINE   ERROR              AGE
    test-master-1   provisioned   test-master-1   true                        9d
    test-master-2   provisioned   test-master-2   true                        9d
    test-master-3   provisioned   test-master-3   true                        9d
    test-worker-1   provisioned   test-worker-1   true                        9d
    test-worker-2   provisioned   test-worker-2   true                        9d
    test-worker-3   inspecting                    true     inspection error   19h
    
  2. Verify whether the dnsmasq pod was in Ready state when the inspection of the affected baremetal hosts (test-worker-3 in the example above) was started:

    kubectl -n kaas get pod <dnsmasq-pod-name> -oyaml
    

    Example of system response:

    ...
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: Initialized
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: ContainersReady
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: PodScheduled
      containerStatuses:
      - containerID: containerd://6dbcf2fc4b36ce4c549c9191ab01f72d0236c51d42947675302675e4bfaf4cdf
        image: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq:base-2-28-alpine-20240812132650
        imageID: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq@sha256:3dad3e278add18e69b2608e462691c4823942641a0f0e25e6811e703e3c23b3b
        lastState:
          terminated:
            containerID: containerd://816fcf079cd544acd74e312065de5b5ed4dbf1dc6159fefffff4f644b5e45987
            exitCode: 0
            finishedAt: "2024-10-11T07:38:35Z"
            reason: Completed
            startedAt: "2024-10-10T15:37:45Z"
        name: dhcpd
        ready: true
        restartCount: 2
        started: true
        state:
          running:
            startedAt: "2024-10-11T07:38:37Z"
      ...
    

    In the system response above, the dhcpd container was not ready between "2024-10-11T07:38:35Z" and "2024-10-11T07:38:54Z".

  3. Verify the affected baremetal host. For example:

    kubectl get bmh -n managed-ns test-worker-3 -oyaml
    

    Example of system response:

    ...
    status:
      errorCount: 15
      errorMessage: Introspection timeout
      errorType: inspection error
      ...
      operationHistory:
        deprovision:
          end: null
          start: null
        inspect:
          end: null
          start: "2024-10-11T07:38:19Z"
        provision:
          end: null
          start: null
        register:
          end: "2024-10-11T07:38:19Z"
          start: "2024-10-11T07:37:25Z"
    

    In the system response above, inspection was started at "2024-10-11T07:38:19Z", immediately before the period of the dhcpd container downtime. Therefore, this node is most likely affected by the issue.

Workaround

  1. Reboot the node using the IPMI reset or cycle command.

  2. If the node fails to boot, remove the failed BareMetalHost object and create it again:

    1. Remove BareMetalHost object. For example:

      kubectl delete bmh -n managed-ns test-worker-3
      
    2. Verify that the BareMetalHost object is removed:

      kubectl get bmh -n managed-ns test-worker-3
      
    3. Create a BareMetalHost object from the template. For example:

      kubectl create -f bmhc-test-worker-3.yaml
      kubectl create -f bmh-test-worker-3.yaml
      
[42386] A load balancer service does not obtain the external IP address

Due to the MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first service that was changed loses its external IP address, while the second service, which was changed later, has the external IP address assigned as expected.

To work around the issue, make a dummy change to the service object where external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.


Ceph
[50566] Ceph upgrade is very slow during patch or major cluster update

Due to the upstream Ceph issue 66717, during CVE upgrade of the Ceph daemon image of Ceph Reef 18.2.4, OSDs may start slow and even fail the starting probe with the following describe output in the rook-ceph-osd-X pod:

 Warning  Unhealthy  57s (x16 over 3m27s)  kubelet  Startup probe failed:
 ceph daemon health check failed with the following output:
> no valid command found; 10 closest matches:
> 0
> 1
> 2
> abort
> assert
> bluefs debug_inject_read_zeros
> bluefs files list
> bluefs stats
> bluestore bluefs device info [<alloc_size:int>]
> config diff
> admin_socket: invalid command

Workaround:

Complete the following steps during every patch or major cluster update of the Cluster releases 17.2.x, 17.3.x, and 17.4.x (until Ceph 18.2.5 becomes supported):

  1. Plan extra time in the maintenance window for the patch cluster update.

    Slow starts will still impact the update procedure, but after completing the following step, the recovery process noticeably shortens without affecting the overall cluster state and data responsiveness.

  2. Select one of the following options:

    • Before the cluster update, set the noout flag:

      ceph osd set noout
      

      Once the Ceph OSDs image upgrade is done, unset the flag:

      ceph osd unset noout
      
    • Monitor the Ceph OSDs image upgrade. If the symptoms of slow start appear, set the noout flag as soon as possible. Once the Ceph OSDs image upgrade is done, unset the flag.

[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a managed cluster that is based on bare metal and has Ceph enabled fails with the PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the description of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.


LCM
[39437] Failure to replace a master node on a Container Cloud cluster

Fixed in 2.29.0 (17.4.0 and 16.4.0)

During the replacement of a master node on a cluster of any type, the process may get stuck with Kubelet's NodeReady condition is Unknown in the machine status on the remaining master nodes.

As a workaround, log in on the affected node and run the following command:

docker restart ucp-kubelet
[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.

[30294] Replacement of a master node is stuck on the calico-node Pod start

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a master node on a cluster of any type, the calico-node Pod fails to start on a new node that has the same IP address as the node being replaced.

Workaround:

  1. Log in to any master node.

  2. From a CLI with an MKE client bundle, create a shell alias to start calicoctl using the mirantis/ucp-dsinfo image:

    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/key.pem \
    -e ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/ca.pem \
    -e ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v /var/lib/docker/volumes/ucp-kv-certs/_data:/var/lib/docker/volumes/ucp-kv-certs/_data:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl \
    "
    
    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \
    -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \
    -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v ucp-node-certs:/ucp-node-certs:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl --allow-version-mismatch \
    "
    

    In the above command, replace the following values with the corresponding settings of the affected cluster:

    • <etcdEndpoint> is the etcd endpoint defined in the Calico configuration file. For example, ETCD_ENDPOINTS=127.0.0.1:12378

    • <mkeVersion> is the MKE version installed on your cluster. For example, mirantis/ucp-dsinfo:3.5.7.

  3. Verify the node list on the cluster:

    kubectl get node
    
  4. Compare this list with the node list in Calico to identify the old node:

    calicoctl get node -o wide
    
  5. Remove the old node from Calico:

    calicoctl delete node kaas-node-<nodeID>
    
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>

StackLight
[44193] OpenSearch reaches 85% disk usage watermark affecting the cluster state

Fixed in 2.29.0 (17.4.0 and 16.4.0)

On High Availability (HA) clusters that use Local Volume Provisioner (LVP), Prometheus and OpenSearch from StackLight may share the same pool of storage. In such a configuration, OpenSearch may approach the 85% disk usage watermark due to the combined storage allocation and usage patterns set by the Persistent Volume Claim (PVC) size parameters for Prometheus and OpenSearch, which consume storage the most.

When the 85% threshold is reached, the affected node is transitioned to the read-only state, preventing shard allocation and causing the OpenSearch cluster state to transition to Warning (Yellow) or Critical (Red).

Caution

The issue and the provided workaround apply only to clusters on which OpenSearch and Prometheus utilize the same storage pool.

To verify that the cluster is affected:

  1. Verify the result of the following formula:

    0.8 × OpenSearch_PVC_Size_GB + Prometheus_PVC_Size_GB > 0.85 × Total_Storage_Capacity_GB
    

    In the formula, define the following values:

    OpenSearch_PVC_Size_GB

    Derived from .values.elasticsearch.persistentVolumeUsableStorageSizeGB, defaulting to .values.elasticsearch.persistentVolumeClaimSize if unspecified. To obtain the OpenSearch PVC size:

    kubectl -n <namespaceName> get cluster <clusterName> -o yaml |\
    yq '.spec.providerSpec.value.helmReleases[] | select(.name == "stacklight") | .values.elasticsearch.persistentVolumeClaimSize '
    

    Example of system response:

    10000Gi
    
    Prometheus_PVC_Size_GB

    Sourced from .values.prometheusServer.persistentVolumeClaimSize. To obtain the Prometheus PVC size:

    kubectl -n <namespaceName> get cluster <clusterName> -o yaml |\
    yq '.spec.providerSpec.value.helmReleases[] | select(.name == "stacklight") | .values.prometheusServer.persistentVolumeClaimSize '
    

    Example of system response:

    4000Gi
    
    Total_Storage_Capacity_GB

    Total capacity of the OpenSearch PVCs. For LVP, the capacity of the storage pool. To obtain the total capacity:

    kubectl get pvc -n stacklight -l app=opensearch-master \
    -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
    

    The system response contains multiple outputs, one per opensearch-master node. Select the capacity for the affected node.

    Note

    Convert the values to GB if they are set in different units.

    If the formula result is positive, it is an early indication that the cluster is affected.

  2. Verify whether the OpenSearchClusterStatusWarning or OpenSearchClusterStatusCritical alert is firing. If so, verify the following:

    1. Log in to the OpenSearch web UI.

    2. In Management -> Dev Tools, run the following command:

      GET _cluster/allocation/explain
      

      The following system response indicates that the corresponding node is affected:

      "explanation": "the node is above the low watermark cluster setting \
      [cluster.routing.allocation.disk.watermark.low=85%], using more disk space \
      than the maximum allowed [85.0%], actual free: [xx.xxx%]"
      

      Note

      The system response may contain an even higher watermark percentage than 85.0%, depending on the case.

Workaround:

Warning

The workaround implies adjustment of the retention threshold for OpenSearch. Depending on the new threshold, some old logs will be deleted.

  1. Adjust or set .values.elasticsearch.persistentVolumeUsableStorageSizeGB to a lower value so that the result of the verification formula above becomes non-positive. For configuration details, see MOSK Operations Guide: StackLight configuration parameters - OpenSearch.

    Mirantis also recommends reserving some space for other PVCs using storage from the pool. Use the following formula to calculate the required space:

    persistentVolumeUsableStorageSizeGB =
    0.84 × ((1 - Reserved_Percentage - Filesystem_Reserve) ×
    Total_Storage_Capacity_GB - Prometheus_PVC_Size_GB) /
    0.8
    

    In the formula, define the following values:

    Reserved_Percentage

    A user-defined variable that specifies what percentage of the total storage capacity should not be used by OpenSearch or Prometheus. This is used to reserve space for other components. It should be expressed as a decimal. For example, for 5% of reservation, Reserved_Percentage is 0.05. Mirantis recommends using 0.05 as a starting point.

    Filesystem_Reserve

    Percentage to deduct for filesystems that may reserve some portion of the available storage, which is marked as occupied. For example, for EXT4, it is 5% by default, so the value must be 0.05.

    Prometheus_PVC_Size_GB

    Sourced from .values.prometheusServer.persistentVolumeClaimSize.

    Total_Storage_Capacity_GB

    Total capacity of the OpenSearch PVCs. For LVP, the capacity of the storage pool. To obtain the total capacity:

    kubectl get pvc -n stacklight -l app=opensearch-master \
    -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
    

    The system response contains multiple outputs, one per opensearch-master node. Select the capacity for the affected node.

    Note

    Convert the values to GB if they are set in different units.

    The calculation above provides the maximum safe storage to allocate for .values.elasticsearch.persistentVolumeUsableStorageSizeGB. Use this formula as a reference for setting .values.elasticsearch.persistentVolumeUsableStorageSizeGB on a cluster. For a command sketch that evaluates both formulas, see the example after this procedure.

  2. Wait up to 15-20 minutes for OpenSearch to perform the cleanup.

  3. Verify that the cluster is not affected anymore using the procedure above.
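
The following shell sketch, which is not part of the product tooling, evaluates both formulas from this procedure. Replace the placeholder values with the sizes obtained in the steps above, converted to GB:

OPENSEARCH_PVC_GB=10000     # OpenSearch_PVC_Size_GB
PROMETHEUS_PVC_GB=4000      # Prometheus_PVC_Size_GB
TOTAL_CAPACITY_GB=12000     # Total_Storage_Capacity_GB
RESERVED_PERCENTAGE=0.05    # space reserved for other components
FILESYSTEM_RESERVE=0.05     # for example, 0.05 for EXT4

# Affection check: a positive result is an early indication that the cluster is affected.
awk -v o="$OPENSEARCH_PVC_GB" -v p="$PROMETHEUS_PVC_GB" -v t="$TOTAL_CAPACITY_GB" \
  'BEGIN {printf "affection check (GB): %.2f\n", 0.8*o + p - 0.85*t}'

# Maximum safe value for persistentVolumeUsableStorageSizeGB.
awk -v t="$TOTAL_CAPACITY_GB" -v p="$PROMETHEUS_PVC_GB" \
  -v r="$RESERVED_PERCENTAGE" -v f="$FILESYSTEM_RESERVE" \
  'BEGIN {printf "persistentVolumeUsableStorageSizeGB: %.0f\n", 0.84 * ((1 - r - f) * t - p) / 0.8}'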


Container Cloud web UI
[50181] Failure to deploy a compact cluster

A compact MOSK cluster fails to be deployed through the Container Cloud web UI because the web UI does not allow adding any label to the control plane machines or changing dedicatedControlPlane: false.

To work around the issue, manually add the required labels using the CLI. Once done, the cluster deployment resumes.

[50168] Inability to use a new project right after creation

A newly created project does not display all available tabs in the Container Cloud web UI and shows various access denied errors during the first five minutes after creation.

To work around the issue, refresh the browser five minutes after the project creation.

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.28.3. For artifacts of the Cluster releases introduced in 2.28.3, see patch Cluster releases 17.2.7, 16.3.3, and 16.2.7.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

ironic-python-agent.initramfs Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-antelope-jammy-debug-20241118155355

ironic-python-agent.kernel Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-antelope-jammy-debug-20241118155355

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-167-e7a55fd.tgz

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.41.23.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.41.23.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.41.23.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.41.23.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.41.23.tgz

Docker images

ambassador Updated

mirantis.azurecr.io/core/external/nginx:1.41.23

baremetal-dnsmasq

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-28-alpine-20241022121257

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-2-28-alpine-20241111132119

bm-collective

mirantis.azurecr.io/bm/bm-collective:base-2-28-alpine-20241022120001

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.41.23

ironic Updated

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20241128095555

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:antelope-jammy-20241128095555

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240913123302

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-2-28-alpine-20241022122006

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-34a4f54-20240910081335

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-jammy-20240927170336

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20241022120929

Core artifacts
Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.41.23.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.41.23.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.41.23.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.41.23.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.41.23.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.41.23.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.41.23.tgz

credentials-controller

https://binary.mirantis.com/core/helm/credentials-controller-1.41.23.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.41.23.tgz

host-os-modules-controller

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.41.23.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.41.23.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.41.23.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.41.23.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.41.23.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.41.23.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.41.23.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.41.23.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.41.23.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.41.23.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.41.23.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.41.23.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.41.23.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.41.23.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.41.23.tgz

secret-controller

https://binary.mirantis.com/core/helm/secret-controller-1.41.23.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.41.23.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.41.23

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.41.23

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.41.23

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-8

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.41.23

credentials-controller Updated

mirantis.azurecr.io/core/credentials-controller:1.41.23

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.41.23

frontend Updated

mirantis.azurecr.io/core/frontend:1.41.23

host-os-modules-controller Updated

mirantis.azurecr.io/core/host-os-modules-controller:1.41.23

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.41.23

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.41.23

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.41.23

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.41.23

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.41.23

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.41.23

mcc-cache-warmup Updated

mirantis.azurecr.io/core/mcc-cache-warmup:1.41.23

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.41.23

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.41.23

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.41.23

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.41.23

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-14

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.41.23

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.41.23

secret-controller Updated

mirantis.azurecr.io/core/secret-controller:1.41.23

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.41.23

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/core/helm/iam-1.41.23.tgz

Docker images

kubectl

mirantis.azurecr.io/general/kubectl:20240926142019

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-ba8ada4-20240405150338

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240909113408

mcc-keycloak Updated

mirantis.azurecr.io/iam/mcc-keycloak:25.0.6-20241114073807

2.28.2

Important

For MOSK clusters, Container Cloud 2.28.2 is the continuation of the MOSK 24.2.x series using the patch Cluster release 17.2.6. For the update path of the 24.1, 24.2, and 24.3 series, see MOSK documentation: Release Compatibility Matrix - Managed cluster update schema.

The management cluster of a MOSK 24.2.x cluster is automatically updated to the latest patch Cluster release 16.3.2.

The Container Cloud patch release 2.28.2, which is based on the 2.28.0 major release, provides the following updates:

  • Support for the patch Cluster release 16.3.2.

  • Support for the patch Cluster releases 16.2.6 and 17.2.6 that represent Mirantis OpenStack for Kubernetes (MOSK) patch release 24.2.4.

  • Support for MKE 3.7.16.

  • Bare metal: update of Ubuntu mirror from ubuntu-2024-10-14-013948 to ubuntu-2024-10-28-012906 along with update of minor kernel version from 5.15.0-122-generic to 5.15.0-124-generic.

  • Security fixes for CVEs in images.

This patch release also supports the latest major Cluster releases 17.3.0 and 16.3.0. It does not support greenfield deployments based on deprecated Cluster releases. Use the latest available Cluster release instead.

For main deliverables of the parent Container Cloud release of 2.28.2, refer to 2.28.0.

Security notes

In total, since Container Cloud 2.28.1, 15 Common Vulnerabilities and Exposures (CVE) of high severity have been fixed in 2.28.2.

The table below includes the total numbers of addressed unique and common CVEs in images by product component since Container Cloud 2.28.1. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component

CVE type

Critical

High

Total

Kaas core

Unique

0

5

5

Common

0

9

9

StackLight

Unique

0

1

1

Common

0

6

6

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 24.2.4: Security notes.

Addressed issues

The following issues have been addressed in the Container Cloud patch release 2.28.2 along with the patch Cluster releases 16.3.2, 16.2.6, and 17.2.6.

  • [47741] [LCM] Fixed the issue with upgrade to MKE 3.7.15 getting stuck due to the leftover ucp-upgrade-check-images service that is part of MKE 3.7.12.

  • [47304] [StackLight] Fixed the issue with OpenSearch not storing kubelet logs due to the JSON-based format of ucp-kubelet logs.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.28.2 including the Cluster releases 16.2.6, 16.3.2, and 17.2.6.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[47202] Inspection error on bare metal hosts after dnsmasq restart

Note

Moving forward, the workaround for this issue will be moved from Release Notes to MOSK Troubleshooting Guide: Inspection error on bare metal hosts after dnsmasq restart.

If the dnsmasq pod is restarted during the bootstrap of newly added nodes, those nodes may fail to undergo inspection. That can result in an inspection error in the corresponding BareMetalHost objects.

The issue can occur when:

  • The dnsmasq pod was moved to another node.

  • DHCP subnets were changed, including addition or removal. In this case, the dhcpd container of the dnsmasq pod is restarted.

    Caution

    If changing or adding DHCP subnets is required to bootstrap new nodes, apply the subnet changes first, wait until the dnsmasq pod becomes ready, as shown in the check below, and then create the BareMetalHost objects.
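
    A minimal readiness check, assuming the default dnsmasq deployment in the kaas namespace of the management cluster (the same namespace is used in the verification steps below):

    kubectl -n kaas get pods | grep dnsmasq

    Proceed with creating the BareMetalHost objects only after the READY column reports all containers of the dnsmasq pod as ready.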

To verify whether the nodes are affected:

  1. Verify whether the BareMetalHost objects contain the inspection error:

    kubectl get bmh -n <managed-cluster-namespace-name>
    

    Example of system response:

    NAME            STATE         CONSUMER        ONLINE   ERROR              AGE
    test-master-1   provisioned   test-master-1   true                        9d
    test-master-2   provisioned   test-master-2   true                        9d
    test-master-3   provisioned   test-master-3   true                        9d
    test-worker-1   provisioned   test-worker-1   true                        9d
    test-worker-2   provisioned   test-worker-2   true                        9d
    test-worker-3   inspecting                    true     inspection error   19h
    
  2. Verify whether the dnsmasq pod was in Ready state when the inspection of the affected baremetal hosts (test-worker-3 in the example above) was started:

    kubectl -n kaas get pod <dnsmasq-pod-name> -oyaml
    

    Example of system response:

    ...
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: Initialized
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: ContainersReady
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: PodScheduled
      containerStatuses:
      - containerID: containerd://6dbcf2fc4b36ce4c549c9191ab01f72d0236c51d42947675302675e4bfaf4cdf
        image: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq:base-2-28-alpine-20240812132650
        imageID: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq@sha256:3dad3e278add18e69b2608e462691c4823942641a0f0e25e6811e703e3c23b3b
        lastState:
          terminated:
            containerID: containerd://816fcf079cd544acd74e312065de5b5ed4dbf1dc6159fefffff4f644b5e45987
            exitCode: 0
            finishedAt: "2024-10-11T07:38:35Z"
            reason: Completed
            startedAt: "2024-10-10T15:37:45Z"
        name: dhcpd
        ready: true
        restartCount: 2
        started: true
        state:
          running:
            startedAt: "2024-10-11T07:38:37Z"
      ...
    

    In the system response above, the dhcpd container was not ready between "2024-10-11T07:38:35Z" and "2024-10-11T07:38:54Z".

  3. Verify the affected baremetal host. For example:

    kubectl get bmh -n managed-ns test-worker-3 -oyaml
    

    Example of system response:

    ...
    status:
      errorCount: 15
      errorMessage: Introspection timeout
      errorType: inspection error
      ...
      operationHistory:
        deprovision:
          end: null
          start: null
        inspect:
          end: null
          start: "2024-10-11T07:38:19Z"
        provision:
          end: null
          start: null
        register:
          end: "2024-10-11T07:38:19Z"
          start: "2024-10-11T07:37:25Z"
    

    In the system response above, inspection was started at "2024-10-11T07:38:19Z", immediately before the period of the dhcpd container downtime. Therefore, this node is most likely affected by the issue.

Workaround

  1. Reboot the node using the IPMI reset or cycle command.

  2. If the node fails to boot, remove the failed BareMetalHost object and create it again:

    1. Remove the BareMetalHost object. For example:

      kubectl delete bmh -n managed-ns test-worker-3
      
    2. Verify that the BareMetalHost object is removed:

      kubectl get bmh -n managed-ns test-worker-3
      
    3. Create a BareMetalHost object from the template. For example:

      kubectl create -f bmhc-test-worker-3.yaml
      kubectl create -f bmh-test-worker-3.yaml
      
[42386] A load balancer service does not obtain the external IP address

Due to the MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first changed service remains with no external IP address assigned, while the second service, which was changed later, has the external IP assigned as expected.

To work around the issue, make a dummy change to the service object where external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.


Ceph
[50566] Ceph upgrade is very slow during patch or major cluster update

Due to the upstream Ceph issue 66717, during CVE upgrade of the Ceph daemon image of Ceph Reef 18.2.4, OSDs may start slow and even fail the starting probe with the following describe output in the rook-ceph-osd-X pod:

 Warning  Unhealthy  57s (x16 over 3m27s)  kubelet  Startup probe failed:
 ceph daemon health check failed with the following output:
> no valid command found; 10 closest matches:
> 0
> 1
> 2
> abort
> assert
> bluefs debug_inject_read_zeros
> bluefs files list
> bluefs stats
> bluestore bluefs device info [<alloc_size:int>]
> config diff
> admin_socket: invalid command

Workaround:

Complete the following steps during every patch or major cluster update of the Cluster releases 17.2.x, 17.3.x, and 17.4.x (until Ceph 18.2.5 becomes supported):

  1. Plan extra time in the maintenance window for the patch cluster update.

    Slow starts will still impact the update procedure, but after completing the following step, the recovery process noticeably shortens without affecting the overall cluster state and data responsiveness.

  2. Select one of the following options:

    • Before the cluster update, set the noout flag:

      ceph osd set noout
      

      Once the Ceph OSDs image upgrade is done, unset the flag:

      ceph osd unset noout
      
    • Monitor the Ceph OSDs image upgrade. If the symptoms of slow start appear, set the noout flag as soon as possible. Once the Ceph OSDs image upgrade is done, unset the flag.
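
If you manage Ceph through the ceph-tools Pod rather than from a Ceph node, the flag can be set and unset as in the following sketch. The rook-ceph-tools deployment name is an assumption; verify the actual name in the rook-ceph namespace:

kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd set noout
# ... wait for the Ceph OSDs image upgrade to complete ...
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd unset noout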

[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a managed cluster based on bare metal with Ceph enabled fails with a PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the descriptions of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.
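
For steps 3 and 7, scaling is done with kubectl scale. The following sketch uses placeholder names; substitute the StatefulSet or Deployment identified for the affected Pod and its original replica count:

kubectl -n <affectedProjectName> scale statefulset <affectedStatefulSetName> --replicas=0
# ... perform steps 4-6, then restore the original number of replicas ...
kubectl -n <affectedProjectName> scale statefulset <affectedStatefulSetName> --replicas=<originalReplicas>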


LCM
[39437] Failure to replace a master node on a Container Cloud cluster

Fixed in 2.29.0 (17.4.0 and 16.4.0)

During the replacement of a master node on a cluster of any type, the process may get stuck with Kubelet's NodeReady condition is Unknown in the machine status on the remaining master nodes.

As a workaround, log in on the affected node and run the following command:

docker restart ucp-kubelet
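
After the restart, you can verify that node readiness is restored, for example:

kubectl get node
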
[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.
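
The following kubectl sketch illustrates the steps above. The mariadb-server-0 Pod name and the kaas namespace are assumptions; adjust them to the affected replica in your environment:

# Back up the MariaDB data directory of the affected Pod.
kubectl -n kaas exec mariadb-server-0 -- tar czf /tmp/mysql-backup.tgz /var/lib/mysql
kubectl cp kaas/mariadb-server-0:/tmp/mysql-backup.tgz ./mysql-backup.tgz

# Verify that the other replicas are up and ready.
kubectl -n kaas get pods | grep mariadb

# Remove the galera.cache file for the affected Pod.
kubectl -n kaas exec mariadb-server-0 -- rm /var/lib/mysql/galera.cache

# Remove the affected Pod or wait until it is restarted automatically.
kubectl -n kaas delete pod mariadb-server-0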

[30294] Replacement of a master node is stuck on the calico-node Pod start

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a master node on a cluster of any type, the calico-node Pod fails to start on a new node that has the same IP address as the node being replaced.

Workaround:

  1. Log in to any master node.

  2. From a CLI with an MKE client bundle, create a shell alias to start calicoctl using the mirantis/ucp-dsinfo image:

    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/key.pem \
    -e ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/ca.pem \
    -e ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v /var/lib/docker/volumes/ucp-kv-certs/_data:/var/lib/docker/volumes/ucp-kv-certs/_data:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl \
    "
    
    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \
    -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \
    -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v ucp-node-certs:/ucp-node-certs:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl --allow-version-mismatch \
    "
    

    In the above command, replace the following values with the corresponding settings of the affected cluster:

    • <etcdEndpoint> is the etcd endpoint defined in the Calico configuration file. For example, ETCD_ENDPOINTS=127.0.0.1:12378

    • <mkeVersion> is the MKE version installed on your cluster. For example, mirantis/ucp-dsinfo:3.5.7.

  3. Verify the node list on the cluster:

    kubectl get node
    
  4. Compare this list with the node list in Calico to identify the old node:

    calicoctl get node -o wide
    
  5. Remove the old node from Calico:

    calicoctl delete node kaas-node-<nodeID>
    
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>

StackLight
[47594] Patroni pods may get stuck in the CrashLoopBackOff state

Fixed in 2.28.3 (17.2.7, 16.2.7, and 16.3.3)

The Patroni pods may get stuck in the CrashLoopBackOff state due to the patroni container being terminated with reason: OOMKilled, which you can see in the pod status. For example:

kubectl get pod/patroni-13-0 -n stacklight -o yaml
...
  - containerID: docker://<ID>
    image: mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240828023010
    imageID: docker-pullable://mirantis.azurecr.io/stacklight/spilo@sha256:<ID>
    lastState:
      terminated:
        containerID: docker://<ID>
        exitCode: 137
        finishedAt: "2024-10-17T14:26:25Z"
        reason: OOMKilled
        startedAt: "2024-10-17T14:23:25Z"
    name: patroni
...

As a workaround, increase the memory limit for PostgreSQL to 20Gi in the Cluster object:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: stacklight
        values:
          resources:
            postgresql:
              limits:
                memory: "20Gi"

For a detailed procedure of StackLight configuration, see MOSK Operations Guide: Configure StackLight. For description of the resources option, see MOSK Operations Guide: StackLight configuration parameters - Resource limits.
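
As a sketch only, the limit can also be set by editing the Cluster object directly; the namespace and cluster names are placeholders, and the StackLight configuration procedure referenced above remains the authoritative source:

kubectl -n <namespaceName> edit cluster <clusterName>
# In the editor, locate the stacklight entry under spec.providerSpec.value.helmReleases
# and set values.resources.postgresql.limits.memory to "20Gi", as in the example above.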

[44193] OpenSearch reaches 85% disk usage watermark affecting the cluster state

Fixed in 2.29.0 (17.4.0 and 16.4.0)

On High Availability (HA) clusters that use Local Volume Provisioner (LVP), Prometheus and OpenSearch from StackLight may share the same pool of storage. In such a configuration, OpenSearch may approach the 85% disk usage watermark due to the combined storage allocation and usage patterns set by the Persistent Volume Claim (PVC) size parameters for Prometheus and OpenSearch, which consume storage the most.

When the 85% threshold is reached, the affected node is transitioned to the read-only state, preventing shard allocation and causing the OpenSearch cluster state to transition to Warning (Yellow) or Critical (Red).

Caution

The issue and the provided workaround apply only to clusters on which OpenSearch and Prometheus utilize the same storage pool.

To verify that the cluster is affected:

  1. Verify the result of the following formula:

    0.8 × OpenSearch_PVC_Size_GB + Prometheus_PVC_Size_GB > 0.85 × Total_Storage_Capacity_GB
    

    In the formula, define the following values:

    OpenSearch_PVC_Size_GB

    Derived from .values.elasticsearch.persistentVolumeUsableStorageSizeGB, defaulting to .values.elasticsearch.persistentVolumeClaimSize if unspecified. To obtain the OpenSearch PVC size:

    kubectl -n <namespaceName> get cluster <clusterName> -o yaml |\
    yq '.spec.providerSpec.value.helmReleases[] | select(.name == "stacklight") | .values.elasticsearch.persistentVolumeClaimSize '
    

    Example of system response:

    10000Gi
    
    Prometheus_PVC_Size_GB

    Sourced from .values.prometheusServer.persistentVolumeClaimSize. To obtain the Prometheus PVC size:

    kubectl -n <namespaceName> get cluster <clusterName> -o yaml |\
    yq '.spec.providerSpec.value.helmReleases[] | select(.name == "stacklight") | .values.prometheusServer.persistentVolumeClaimSize '
    

    Example of system response:

    4000Gi
    
    Total_Storage_Capacity_GB

    Total capacity of the OpenSearch PVCs. For LVP, the capacity of the storage pool. To obtain the total capacity:

    kubectl get pvc -n stacklight -l app=opensearch-master \
    -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
    

    The system response contains multiple outputs, one per opensearch-master node. Select the capacity for the affected node.

    Note

    Convert the values to GB if they are set in different units.

    If the formula result is positive, it is an early indication that the cluster is affected.

  2. Verify whether the OpenSearchClusterStatusWarning or OpenSearchClusterStatusCritical alert is firing. If so, verify the following:

    1. Log in to the OpenSearch web UI.

    2. In Management -> Dev Tools, run the following command:

      GET _cluster/allocation/explain
      

      The following system response indicates that the corresponding node is affected:

      "explanation": "the node is above the low watermark cluster setting \
      [cluster.routing.allocation.disk.watermark.low=85%], using more disk space \
      than the maximum allowed [85.0%], actual free: [xx.xxx%]"
      

      Note

      The system response may contain an even higher watermark percentage than 85.0%, depending on the case.

Workaround:

Warning

The workaround implies adjustment of the retention threshold for OpenSearch. Depending on the new threshold, some old logs will be deleted.

  1. Adjust or set .values.elasticsearch.persistentVolumeUsableStorageSizeGB to a lower value so that the result of the verification formula above becomes non-positive. For configuration details, see MOSK Operations Guide: StackLight configuration parameters - OpenSearch.

    Mirantis also recommends reserving some space for other PVCs using storage from the pool. Use the following formula to calculate the required space:

    persistentVolumeUsableStorageSizeGB =
    0.84 × ((1 - Reserved_Percentage - Filesystem_Reserve) ×
    Total_Storage_Capacity_GB - Prometheus_PVC_Size_GB) /
    0.8
    

    In the formula, define the following values:

    Reserved_Percentage

    A user-defined variable that specifies what percentage of the total storage capacity should not be used by OpenSearch or Prometheus. This is used to reserve space for other components. It should be expressed as a decimal. For example, for 5% of reservation, Reserved_Percentage is 0.05. Mirantis recommends using 0.05 as a starting point.

    Filesystem_Reserve

    Percentage to deduct for filesystems that may reserve some portion of the available storage, which is marked as occupied. For example, for EXT4, it is 5% by default, so the value must be 0.05.

    Prometheus_PVC_Size_GB

    Sourced from .values.prometheusServer.persistentVolumeClaimSize.

    Total_Storage_Capacity_GB

    Total capacity of the OpenSearch PVCs. For LVP, the capacity of the storage pool. To obtain the total capacity:

    kubectl get pvc -n stacklight -l app=opensearch-master \
    -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
    

    The system response contains multiple outputs, one per opensearch-master node. Select the capacity for the affected node.

    Note

    Convert the values to GB if they are set in different units.

    The calculation above provides the maximum safe storage to allocate for .values.elasticsearch.persistentVolumeUsableStorageSizeGB. Use this formula as a reference for setting .values.elasticsearch.persistentVolumeUsableStorageSizeGB on a cluster.

  2. Wait up to 15-20 minutes for OpenSearch to perform the cleanup.

  3. Verify that the cluster is not affected anymore using the procedure above.


Container Cloud web UI
[50181] Failure to deploy a compact cluster

A compact MOSK cluster fails to be deployed through the Container Cloud web UI because the web UI does not allow adding any label to the control plane machines or changing dedicatedControlPlane: false.

To work around the issue, manually add the required labels using the CLI. Once done, the cluster deployment resumes.

[50168] Inability to use a new project right after creation

A newly created project does not display all available tabs in the Container Cloud web UI and shows various access denied errors during the first five minutes after creation.

To work around the issue, refresh the browser five minutes after the project creation.

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.28.2. For artifacts of the Cluster releases introduced in 2.28.2, see patch Cluster releases 17.2.6, 16.3.2, and 16.2.6.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries Updated

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-antelope-jammy-debug-20241028161924

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-antelope-jammy-debug-20241028161924

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.41.22.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.41.22.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.41.22.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.41.22.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.41.22.tgz

Docker images

ambassador Updated

mirantis.azurecr.io/core/external/nginx:1.41.22

baremetal-dnsmasq

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-28-alpine-20241022121257

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-2-28-alpine-20241022120949

bm-collective

mirantis.azurecr.io/bm/bm-collective:base-2-28-alpine-20241022120001

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.41.22

ironic Updated

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20241023091304

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:antelope-jammy-20241023091304

ironic-prometheus-exporter Updated

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240913123302

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-2-28-alpine-20241022122006

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-34a4f54-20240910081335

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-jammy-20240927170336

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20241022120929

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.41.22.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.41.22.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.41.22.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.41.22.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.41.22.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.41.22.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.41.22.tgz

credentials-controller

https://binary.mirantis.com/core/helm/credentials-controller-1.41.22.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.41.22.tgz

host-os-modules-controller

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.41.22.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.41.22.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.41.22.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.41.22.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.41.22.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.41.22.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.41.22.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.41.22.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.41.22.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.41.22.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.41.22.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.41.22.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.41.22.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.41.22.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.41.22.tgz

secret-controller

https://binary.mirantis.com/core/helm/secret-controller-1.41.22.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.41.22.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.41.22

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.41.22

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.41.22

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-8

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.41.22

credentials-controller Updated

mirantis.azurecr.io/core/credentials-controller:1.41.22

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.41.22

frontend Updated

mirantis.azurecr.io/core/frontend:1.41.22

host-os-modules-controller Updated

mirantis.azurecr.io/core/host-os-modules-controller:1.41.22

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.41.22

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.41.22

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.41.22

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.41.22

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.41.22

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.41.22

mcc-cache-warmup Updated

mirantis.azurecr.io/core/mcc-cache-warmup:1.41.22

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.41.22

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.41.22

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.41.22

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.41.22

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-14

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.41.22

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.41.22

secret-controller Updated

mirantis.azurecr.io/core/secret-controller:1.41.22

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.41.22

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/core/helm/iam-1.41.22.tgz

Docker images

kubectl

mirantis.azurecr.io/general/kubectl:20240926142019

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-ba8ada4-20240405150338

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240909113408

mcc-keycloak

mirantis.azurecr.io/iam/mcc-keycloak:25.0.6-20240926140203

2.28.1

Important

For MOSK clusters, Container Cloud 2.28.1 is the continuation of the MOSK 24.2.x series using the patch Cluster release 17.2.5. For the update path of the 24.1, 24.2, and 24.3 series, see MOSK documentation: Release Compatibility Matrix - Managed cluster update schema.

The management cluster of a MOSK 24.2.x cluster is automatically updated to the latest patch Cluster release 16.3.1.

The Container Cloud patch release 2.28.1, which is based on the 2.28.0 major release, provides the following updates:

  • Support for the patch Cluster release 16.3.1.

  • Support for the patch Cluster releases 16.2.5 and 17.2.5 that represent Mirantis OpenStack for Kubernetes (MOSK) patch release 24.2.3.

  • Support for MKE 3.7.15.

  • Bare metal: update of Ubuntu mirror from ubuntu-2024-09-11-014225 to ubuntu-2024-10-14-013948 along with update of minor kernel version from 5.15.0-119-generic to 5.15.0-122-generic.

  • Security fixes for CVEs in images.

This patch release also supports the latest major Cluster releases 17.3.0 and 16.3.0. It does not support greenfield deployments based on deprecated Cluster releases. Use the latest available Cluster release instead.

For main deliverables of the parent Container Cloud release of 2.28.1, refer to 2.28.0.

Security notes

In total, since Container Cloud 2.28.0, 400 Common Vulnerabilities and Exposures (CVE) have been fixed in 2.28.1: 46 of critical and 354 of high severity.

The table below includes the total numbers of addressed unique and common CVEs in images by product component since Container Cloud 2.28.0. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component

CVE type

Critical

High

Total

Ceph

Unique

0

1

1

Common

0

4

4

Kaas core

Unique

1

14

15

Common

1

118

119

StackLight

Unique

8

40

48

Common

45

232

277

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 24.2.3: Security notes.

Addressed issues

The following issues have been addressed in the Container Cloud patch release 2.28.1 along with the patch Cluster releases 16.3.1, 16.2.5, and 17.2.5.

  • [46808] [LCM] Fixed the issue with old kernel metapackages remaining on the cluster after kernel upgrade.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.28.1 including the Cluster releases 16.2.5, 16.3.1, and 17.2.5.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[47202] Inspection error on bare metal hosts after dnsmasq restart

Note

Moving forward, the workaround for this issue will be moved from Release Notes to MOSK Troubleshooting Guide: Inspection error on bare metal hosts after dnsmasq restart.

If the dnsmasq pod is restarted during the bootstrap of newly added nodes, those nodes may fail to undergo inspection. That can result in an inspection error in the corresponding BareMetalHost objects.

The issue can occur when:

  • The dnsmasq pod was moved to another node.

  • DHCP subnets were changed, including addition or removal. In this case, the dhcpd container of the dnsmasq pod is restarted.

    Caution

    If changing or adding DHCP subnets is required to bootstrap new nodes, apply the subnet changes first, wait until the dnsmasq pod becomes ready, and then create the BareMetalHost objects.

To verify whether the nodes are affected:

  1. Verify whether the BareMetalHost objects contain the inspection error:

    kubectl get bmh -n <managed-cluster-namespace-name>
    

    Example of system response:

    NAME            STATE         CONSUMER        ONLINE   ERROR              AGE
    test-master-1   provisioned   test-master-1   true                        9d
    test-master-2   provisioned   test-master-2   true                        9d
    test-master-3   provisioned   test-master-3   true                        9d
    test-worker-1   provisioned   test-worker-1   true                        9d
    test-worker-2   provisioned   test-worker-2   true                        9d
    test-worker-3   inspecting                    true     inspection error   19h
    
  2. Verify whether the dnsmasq pod was in Ready state when the inspection of the affected baremetal hosts (test-worker-3 in the example above) was started:

    kubectl -n kaas get pod <dnsmasq-pod-name> -oyaml
    

    Example of system response:

    ...
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: Initialized
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: ContainersReady
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: PodScheduled
      containerStatuses:
      - containerID: containerd://6dbcf2fc4b36ce4c549c9191ab01f72d0236c51d42947675302675e4bfaf4cdf
        image: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq:base-2-28-alpine-20240812132650
        imageID: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq@sha256:3dad3e278add18e69b2608e462691c4823942641a0f0e25e6811e703e3c23b3b
        lastState:
          terminated:
            containerID: containerd://816fcf079cd544acd74e312065de5b5ed4dbf1dc6159fefffff4f644b5e45987
            exitCode: 0
            finishedAt: "2024-10-11T07:38:35Z"
            reason: Completed
            startedAt: "2024-10-10T15:37:45Z"
        name: dhcpd
        ready: true
        restartCount: 2
        started: true
        state:
          running:
            startedAt: "2024-10-11T07:38:37Z"
      ...
    

    In the system response above, the dhcpd container was not ready between "2024-10-11T07:38:35Z" and "2024-10-11T07:38:54Z".

  3. Verify the affected baremetal host. For example:

    kubectl get bmh -n managed-ns test-worker-3 -oyaml
    

    Example of system response:

    ...
    status:
      errorCount: 15
      errorMessage: Introspection timeout
      errorType: inspection error
      ...
      operationHistory:
        deprovision:
          end: null
          start: null
        inspect:
          end: null
          start: "2024-10-11T07:38:19Z"
        provision:
          end: null
          start: null
        register:
          end: "2024-10-11T07:38:19Z"
          start: "2024-10-11T07:37:25Z"
    

    In the system response above, inspection was started at "2024-10-11T07:38:19Z", immediately before the period of the dhcpd container downtime. Therefore, this node is most likely affected by the issue.

Workaround

  1. Reboot the node using the IPMI reset or cycle command.

  2. If the node fails to boot, remove the failed BareMetalHost object and create it again:

    1. Remove the BareMetalHost object. For example:

      kubectl delete bmh -n managed-ns test-worker-3
      
    2. Verify that the BareMetalHost object is removed:

      kubectl get bmh -n managed-ns test-worker-3
      
    3. Create a BareMetalHost object from the template. For example:

      kubectl create -f bmhc-test-worker-3.yaml
      kubectl create -f bmh-test-worker-3.yaml
      
[42386] A load balancer service does not obtain the external IP address

Due to the MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first changed service remains with no external IP address assigned, while the second service, which was changed later, has the external IP assigned as expected.

To work around the issue, make a dummy change to the service object where external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.


Ceph
[50566] Ceph upgrade is very slow during patch or major cluster update

Due to the upstream Ceph issue 66717, during CVE upgrade of the Ceph daemon image of Ceph Reef 18.2.4, OSDs may start slow and even fail the starting probe with the following describe output in the rook-ceph-osd-X pod:

 Warning  Unhealthy  57s (x16 over 3m27s)  kubelet  Startup probe failed:
 ceph daemon health check failed with the following output:
> no valid command found; 10 closest matches:
> 0
> 1
> 2
> abort
> assert
> bluefs debug_inject_read_zeros
> bluefs files list
> bluefs stats
> bluestore bluefs device info [<alloc_size:int>]
> config diff
> admin_socket: invalid command

Workaround:

Complete the following steps during every patch or major cluster update of the Cluster releases 17.2.x, 17.3.x, and 17.4.x (until Ceph 18.2.5 becomes supported):

  1. Plan extra time in the maintenance window for the patch cluster update.

    Slow starts will still impact the update procedure, but after completing the following step, the recovery process becomes noticeably shorter without affecting the overall cluster state and data responsiveness.

  2. Select one of the following options:

    • Before the cluster update, set the noout flag:

      ceph osd set noout
      

      Once the Ceph OSDs image upgrade is done, unset the flag:

      ceph osd unset noout
      
    • Monitor the Ceph OSDs image upgrade. If the symptoms of slow start appear, set the noout flag as soon as possible. Once the Ceph OSDs image upgrade is done, unset the flag.

[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a managed cluster based on bare metal with Ceph enabled fails with a PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the descriptions of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.
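
    A minimal sketch of this step, assuming the affected workload is the prometheus-server StatefulSet in the stacklight namespace (adjust the workload type, name, and namespace to your case):

    kubectl -n stacklight scale statefulset prometheus-server --replicas=0

    The same command with the original replica count is used to scale the workload back up in the last step of this procedure.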

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.


LCM
[47741] Upgrade to MKE 3.7.15 is blocked by ucp-upgrade-check-images

Fixed in 2.28.2 (17.2.6, 16.2.6, and 16.3.2)

Upgrade from MKE 3.7.12 to 3.7.15 may get stuck due to the leftover ucp-upgrade-check-images service that is part of MKE 3.7.12.

As a workaround, on any master node, remove the leftover service using the docker service rm ucp-upgrade-check-images command.

[39437] Failure to replace a master node on a Container Cloud cluster

Fixed in 2.29.0 (17.4.0 and 16.4.0)

During the replacement of a master node on a cluster of any type, the process may get stuck with Kubelet's NodeReady condition is Unknown in the machine status on the remaining master nodes.

As a workaround, log in on the affected node and run the following command:

docker restart ucp-kubelet
[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.
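
A minimal command sketch of the steps above, assuming the affected Pod is mariadb-server-0 in the kaas namespace; verify the actual Pod name and namespace on your management cluster:

# Step 1: back up the MariaDB data directory inside the affected Pod
kubectl -n kaas exec mariadb-server-0 -- tar -czf /tmp/mysql-backup.tar.gz -C /var/lib mysql
# Step 2: verify that the other replicas are up and ready
kubectl -n kaas get pods | grep mariadb-server
# Step 3: remove the galera.cache file on the affected Pod
kubectl -n kaas exec mariadb-server-0 -- rm -f /var/lib/mysql/galera.cache
# Step 4: remove the affected Pod so that Kubernetes recreates it
kubectl -n kaas delete pod mariadb-server-0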

[30294] Replacement of a master node is stuck on the calico-node Pod start

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a master node on a cluster of any type, the calico-node Pod fails to start on a new node that has the same IP address as the node being replaced.

Workaround:

  1. Log in to any master node.

  2. From a CLI with an MKE client bundle, create a shell alias to start calicoctl using the mirantis/ucp-dsinfo image. Use one of the following two options depending on the location of the etcd certificates on your cluster:

    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/key.pem \
    -e ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/ca.pem \
    -e ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v /var/lib/docker/volumes/ucp-kv-certs/_data:/var/lib/docker/volumes/ucp-kv-certs/_data:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl \
    "
    
    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \
    -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \
    -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v ucp-node-certs:/ucp-node-certs:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl --allow-version-mismatch \
    "
    

    In the above command, replace the following values with the corresponding settings of the affected cluster:

    • <etcdEndpoint> is the etcd endpoint defined in the Calico configuration file. For example, ETCD_ENDPOINTS=127.0.0.1:12378

    • <mkeVersion> is the MKE version installed on your cluster. For example, 3.5.7, which results in the mirantis/ucp-dsinfo:3.5.7 image.

  3. Verify the node list on the cluster:

    kubectl get node
    
  4. Compare this list with the node list in Calico to identify the old node:

    calicoctl get node -o wide
    
  5. Remove the old node from Calico:

    calicoctl delete node kaas-node-<nodeID>
    
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>

StackLight
[47594] Patroni pods may get stuck in the CrashLoopBackOff state

Fixed in 2.28.3 (17.2.7, 16.2.7, and 16.3.3)

The Patroni pods may get stuck in the CrashLoopBackOff state due to the patroni container being terminated with reason: OOMKilled, which you can see in the pod status. For example:

kubectl get pod/patroni-13-0 -n stacklight -o yaml
...
  - containerID: docker://<ID>
    image: mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240828023010
    imageID: docker-pullable://mirantis.azurecr.io/stacklight/spilo@sha256:<ID>
    lastState:
      terminated:
        containerID: docker://<ID>
        exitCode: 137
        finishedAt: "2024-10-17T14:26:25Z"
        reason: OOMKilled
        startedAt: "2024-10-17T14:23:25Z"
    name: patroni
...

As a workaround, increase the memory limit for PostgreSQL to 20Gi in the Cluster object:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: stacklight
        values:
          resources:
            postgresql:
              limits:
                memory: "20Gi"
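
A typical way to apply this change, assuming access to the management cluster kubeconfig, is to edit the Cluster object of the affected cluster directly:

kubectl -n <projectName> edit cluster <clusterName>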

For a detailed procedure of StackLight configuration, see MOSK Operations Guide: Configure StackLight. For description of the resources option, see MOSK Operations Guide: StackLight configuration parameters - Resource limits.

[47304] OpenSearch does not store kubelet logs

Fixed in 2.28.2 (17.2.6, 16.2.6, and 16.3.2)

Due to the JSON-based format of ucp-kubelet logs, OpenSearch does not store kubelet logs. Mirantis is working on the issue and will deliver the resolution in one of the upcoming patch releases.

[44193] OpenSearch reaches 85% disk usage watermark affecting the cluster state

Fixed in 2.29.0 (17.4.0 and 16.4.0)

On High Availability (HA) clusters that use Local Volume Provisioner (LVP), Prometheus and OpenSearch from StackLight may share the same pool of storage. In such a configuration, OpenSearch may approach the 85% disk usage watermark due to the combined storage allocation and usage patterns set by the Persistent Volume Claim (PVC) size parameters for Prometheus and OpenSearch, which consume storage the most.

When the 85% threshold is reached, the affected node is transitioned to the read-only state, preventing shard allocation and causing the OpenSearch cluster state to transition to Warning (Yellow) or Critical (Red).

Caution

The issue and the provided workaround apply only for clusters on which OpenSearch and Prometheus utilize the same storage pool.

To verify that the cluster is affected:

  1. Verify the result of the following formula:

    0.8 × OpenSearch_PVC_Size_GB + Prometheus_PVC_Size_GB > 0.85 × Total_Storage_Capacity_GB
    

    In the formula, define the following values:

    OpenSearch_PVC_Size_GB

    Derived from .values.elasticsearch.persistentVolumeUsableStorageSizeGB, defaulting to .values.elasticsearch.persistentVolumeClaimSize if unspecified. To obtain the OpenSearch PVC size:

    kubectl -n <namespaceName> get cluster <clusterName> -o yaml |\
    yq '.spec.providerSpec.value.helmReleases[] | select(.name == "stacklight") | .values.elasticsearch.persistentVolumeClaimSize '
    

    Example of system response:

    10000Gi
    
    Prometheus_PVC_Size_GB

    Sourced from .values.prometheusServer.persistentVolumeClaimSize. To obtain the Prometheus PVC size:

    kubectl -n <namespaceName> get cluster <clusterName> -o yaml |\
    yq '.spec.providerSpec.value.helmReleases[] | select(.name == "stacklight") | .values.prometheusServer.persistentVolumeClaimSize '
    

    Example of system response:

    4000Gi
    
    Total_Storage_Capacity_GB

    Total capacity of the OpenSearch PVCs. For LVP, the capacity of the storage pool. To obtain the total capacity:

    kubectl get pvc -n stacklight -l app=opensearch-master \
    -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
    

    The system response contains multiple outputs, one per opensearch-master node. Select the capacity for the affected node.

    Note

    Convert the values to GB if they are set in different units.

    If the formula result is positive, it is an early indication that the cluster is affected.
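
    For illustration, with the example PVC sizes above (10000 GB for OpenSearch and 4000 GB for Prometheus) and a hypothetical total storage capacity of 13000 GB: 0.8 × 10000 + 4000 = 12000, while 0.85 × 13000 = 11050. Since 12000 > 11050, the result is positive and the cluster is affected. With a total capacity of 16000 GB instead, 0.85 × 16000 = 13600 > 12000, and the cluster is not affected.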

  2. Verify whether the OpenSearchClusterStatusWarning or OpenSearchClusterStatusCritical alert is firing. If so, verify the following:

    1. Log in to the OpenSearch web UI.

    2. In Management -> Dev Tools, run the following command:

      GET _cluster/allocation/explain
      

      The following system response indicates that the corresponding node is affected:

      "explanation": "the node is above the low watermark cluster setting \
      [cluster.routing.allocation.disk.watermark.low=85%], using more disk space \
      than the maximum allowed [85.0%], actual free: [xx.xxx%]"
      

      Note

      The system response may contain a watermark percentage higher than 85.0%, depending on the case.

Workaround:

Warning

The workaround implies adjustment of the retention threshold for OpenSearch. Depending on the new threshold, some old logs will be deleted.

  1. Adjust or set .values.elasticsearch.persistentVolumeUsableStorageSizeGB to a lower value so that the result of the verification formula above becomes non-positive. For configuration details, see MOSK Operations Guide: StackLight configuration parameters - OpenSearch.

    Mirantis also recommends reserving some space for other PVCs using storage from the pool. Use the following formula to calculate the required space:

    persistentVolumeUsableStorageSizeGB =
    0.84 × ((1 - Reserved_Percentage - Filesystem_Reserve) ×
    Total_Storage_Capacity_GB - Prometheus_PVC_Size_GB) /
    0.8
    

    In the formula, define the following values:

    Reserved_Percentage

    A user-defined variable that specifies what percentage of the total storage capacity should not be used by OpenSearch or Prometheus. This is used to reserve space for other components. It should be expressed as a decimal. For example, for 5% of reservation, Reserved_Percentage is 0.05. Mirantis recommends using 0.05 as a starting point.

    Filesystem_Reserve

    Percentage to deduct for filesystems that may reserve some portion of the available storage, which is marked as occupied. For example, for EXT4, it is 5% by default, so the value must be 0.05.

    Prometheus_PVC_Size_GB

    Sourced from .values.prometheusServer.persistentVolumeClaimSize.

    Total_Storage_Capacity_GB

    Total capacity of the OpenSearch PVCs. For LVP, the capacity of the storage pool. To obtain the total capacity:

    kubectl get pvc -n stacklight -l app=opensearch-master \
    -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
    

    The system response contains multiple outputs, one per opensearch-master node. Select the capacity for the affected node.

    Note

    Convert the values to GB if they are set in different units.

    The formula above provides the maximum safe storage to allocate for .values.elasticsearch.persistentVolumeUsableStorageSizeGB. Use it as a reference when setting .values.elasticsearch.persistentVolumeUsableStorageSizeGB on a cluster.
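
    For illustration, with the recommended Reserved_Percentage of 0.05, a Filesystem_Reserve of 0.05 (EXT4), the example Prometheus PVC size of 4000 GB, and a hypothetical total storage capacity of 16000 GB: (1 - 0.05 - 0.05) × 16000 = 14400; 14400 - 4000 = 10400; 0.84 × 10400 / 0.8 = 10920. Therefore, persistentVolumeUsableStorageSizeGB should be set to at most approximately 10920 GB in this hypothetical case.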

  2. Wait up to 15-20 minutes for OpenSearch to perform the cleanup.

  3. Verify that the cluster is not affected anymore using the procedure above.


Container Cloud web UI
[50181] Failure to deploy a compact cluster

A compact MOSK cluster fails to be deployed through the Container Cloud web UI because the web UI does not allow adding labels to the control plane machines or changing the dedicatedControlPlane: false value.

To work around the issue, manually add the required labels using the CLI. Once done, the cluster deployment resumes.

[50168] Inability to use a new project right after creation

A newly created project does not display all available tabs in the Container Cloud web UI and shows various access denied errors during the first five minutes after creation.

To work around the issue, refresh the browser five minutes after creating the project.

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.28.1. For artifacts of the Cluster releases introduced in 2.28.1, see patch Cluster releases 17.2.5, 16.3.1, and 16.2.5.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries Updated

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20241014163420

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20241014163420

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.41.18.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.41.18.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.41.18.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.41.18.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.41.18.tgz

Docker images

ambassador Updated

mirantis.azurecr.io/core/external/nginx:1.41.18

baremetal-dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-28-alpine-20241022121257

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-2-28-alpine-20241022120949

bm-collective Updated

mirantis.azurecr.io/bm/bm-collective:base-2-28-alpine-20241022120001

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.41.18

ironic Updated

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240927160001

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:antelope-jammy-20240927160001

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240819102310

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-2-28-alpine-20241022122006

kubernetes-entrypoint Updated

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-34a4f54-20240910081335

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.6.17-jammy-20240927170336

syslog-ng Updated

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20241022120929

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.41.18.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.41.18.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.41.18.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.41.18.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.41.18.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.41.18.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.41.18.tgz

credentials-controller

https://binary.mirantis.com/core/helm/credentials-controller-1.41.18.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.41.18.tgz

host-os-modules-controller

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.41.18.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.41.18.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.41.18.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.41.18.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.41.18.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.41.18.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.41.18.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.41.18.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.41.18.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.41.18.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.41.18.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.41.18.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.41.18.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.41.18.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.41.18.tgz

secret-controller

https://binary.mirantis.com/core/helm/secret-controller-1.41.18.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.41.18.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.41.18

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.41.18

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.41.18

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-8

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.41.18

credentials-controller Updated

mirantis.azurecr.io/core/credentials-controller:1.41.18

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.41.18

frontend Updated

mirantis.azurecr.io/core/frontend:1.41.18

host-os-modules-controller Updated

mirantis.azurecr.io/core/host-os-modules-controller:1.41.18

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.41.18

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.41.18

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.41.18

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.41.18

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.41.18

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.41.18

mcc-cache-warmup Updated

mirantis.azurecr.io/core/mcc-cache-warmup:1.41.18

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.41.18

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.41.18

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.41.18

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.41.18

registry Updated

mirantis.azurecr.io/lcm/registry:v2.8.1-14

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.41.18

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.41.18

secret-controller Updated

mirantis.azurecr.io/core/secret-controller:1.41.18

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.41.18

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/core/helm/iam-1.41.18.tgz

Docker images

kubectl

mirantis.azurecr.io/general/kubectl:20240926142019

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-ba8ada4-20240405150338

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240909113408

mcc-keycloak

mirantis.azurecr.io/iam/mcc-keycloak:25.0.6-20240926140203

2.28.0

The Mirantis Container Cloud major release 2.28.0:

  • Introduces support for the Cluster release 17.3.0 that is based on the Cluster release 16.3.0 and represents Mirantis OpenStack for Kubernetes (MOSK) 24.3.

  • Introduces support for the Cluster release 16.3.0 that is based on Mirantis Container Runtime (MCR) 23.0.14 and Mirantis Kubernetes Engine (MKE) 3.7.12 with Kubernetes 1.27.

  • Does not support greenfield deployments on deprecated Cluster releases of the 17.2.x and 16.2.x series. Use the latest available Cluster releases of the series instead.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.28.0.

Enhancements

This section outlines new features and enhancements introduced in the Container Cloud release 2.28.0. For the list of enhancements delivered with the Cluster releases introduced by Container Cloud 2.28.0, see 17.3.0 and 16.3.0.

General availability for Ubuntu 22.04 on MOSK clusters

Implemented full support for Ubuntu 22.04 LTS (Jammy Jellyfish) as the default host operating system in MOSK clusters, including greenfield deployments and update from Ubuntu 20.04 to 22.04 on existing clusters.

Ubuntu 20.04 is deprecated for greenfield deployments and supported during the MOSK 24.3 release cycle only for existing clusters.

Warning

During the course of the Container Cloud 2.28.x series, Mirantis highly recommends upgrading the operating system on all nodes of your managed clusters to Ubuntu 22.04 before the next major Cluster release becomes available.

It is not mandatory to upgrade all machines at once. You can upgrade them one by one or in small batches, for example, if the maintenance window is limited in time.

Otherwise, the Cluster release update of the Ubuntu 20.04-based managed clusters will become impossible as of Container Cloud 2.29.0 with Ubuntu 22.04 as the only supported version.

Management cluster update to Container Cloud 2.29.1 will be blocked if at least one node of any related managed cluster is running Ubuntu 20.04.

Note

Since Container Cloud 2.27.0 (Cluster release 16.2.0), existing MOSK management clusters were automatically updated to Ubuntu 22.04 during cluster upgrade. Greenfield deployments of management clusters are also based on Ubuntu 22.04.

Day-2 operations for bare metal: updating modules

TechPreview

Implemented the capability to update custom modules using deprecation. Once you create a new custom module, you can use it to deprecate another module by adding the deprecates field to metadata.yaml of the new module. The related HostOSConfiguration and HostOSConfigurationModules objects reflect the deprecation status of new and old modules using the corresponding fields in spec and status sections.

Also, added monitoring of deprecated modules by implementing the StackLight metrics for the Host Operating System Modules Controller along with the Day2ManagementControllerTargetDown and Day2ManagementDeprecatedConfigs alerts to notify the cloud operator about detected deprecations and issues with host-os-modules-controller.

Note

Deprecation is soft, meaning that no actual restrictions are applied to the usage of a deprecated module.

Caution

Deprecating a version automatically deprecates all lower SemVer versions of the specified module.
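
The following is an illustrative sketch of a metadata.yaml fragment for a new module version that deprecates an older one. The exact layout of the deprecates field is an assumption; refer to the Day-2 operations documentation for the authoritative schema:

name: custom-module
version: 1.1.0
deprecates:
- name: custom-module
  version: 1.0.0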

Day-2 operations for bare metal: configuration enhancements for modules

TechPreview

Introduced the following configuration enhancements for custom modules:

  • Module-specific Ansible configuration

    Updated the Ansible execution mechanism for running any modules. The default ansible.cfg file is now placed in /etc/ansible/mcc.cfg and used for execution of lcm-ansible and day-2 modules. However, if a module has its own ansible.cfg in the module root folder, such configuration is used for the module execution instead of the default one.

  • Configuration of supported operating system distribution

    Added the supportedDistributions field to the metadata section of a module custom resource to define the list of supported operating system distributions for the module. This field is informative and does not block the module execution on machines running unsupported distributions, but such execution will most probably fail with an error.

  • Separate flag for machines requiring reboot

    Introduced a separate /run/day2/reboot-required file for day-2 modules to add a notification about required reboot for a machine and a reason for reboot that appear after the module execution. The feature allows for separation of the reboot reason between LCM and day-2 operations.

Update group for controller nodes

TechPreview

Implemented the update group for controller nodes using the UpdateGroup resource, which is automatically generated during initial cluster creation with the following settings:

  • Name: <cluster-name>-control

  • Index: 1

  • Concurrent updates: 1

This feature decouples the concurrency settings from the global cluster level and provides update flexibility.

All control plane nodes are automatically assigned to the control update group with no possibility to change it.
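
For illustration, a sketch of the automatically generated UpdateGroup object for a cluster named demo; the apiVersion and field names are assumptions based on the settings listed above:

apiVersion: kaas.mirantis.com/v1alpha1
kind: UpdateGroup
metadata:
  name: demo-control
spec:
  index: 1
  concurrentUpdates: 1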

Note

On existing clusters created before 2.28.0 (Cluster releases 17.2.0, 16.2.0, or earlier), the control update group is created after upgrade of the Container Cloud release to 2.28.0 (Cluster release 16.3.0) on the management cluster.

Reboot of machines using update groups

TechPreview

Implemented the rebootIfUpdateRequires parameter for the UpdateGroup custom resource. The parameter allows for rebooting a set of controller or worker machines added to an update group during a Cluster release update that requires a reboot, for example, when kernel version update is available in the target Cluster release. The feature reduces manual intervention and overall downtime during cluster update.

Note

By default, rebootIfUpdateRequires is set to false on managed clusters and to true on management clusters.

Self-diagnostics for management and managed clusters

Implemented the Diagnostic Controller, a tool with a set of diagnostic checks to perform self-diagnostics of any Container Cloud cluster and help the operator easily understand, troubleshoot, and resolve potential issues against the following major subsystems: core, bare metal, Ceph, StackLight, Tungsten Fabric, and OpenStack. The Diagnostic Controller analyzes the configuration of the cluster subsystems and reports results of checks that contain useful information about cluster health.

Running self-diagnostics on both management and managed clusters is essential to ensure the overall health and optimal performance of your cluster. Mirantis recommends running self-diagnostics before a cluster update, node replacement, or any other significant change in the cluster to prevent potential issues and optimize the maintenance window.

Configuration of groups in auditd

TechPreview

Simplified the default auditd configuration by implementing the preset groups that you can use in presetRules instead of exact names or the virtual group all. The feature allows enabling a limited set of presets using a single keyword (group name).

Also, optimized disk usage by removing the following Docker rule, which was dropped from the Docker CIS Benchmark 1.3.0 due to producing excessive events:

# 1.2.4 Ensure auditing is configured for Docker files and directories - /var/lib/docker
-w /var/lib/docker -k docker
Amendments for the ClusterUpdatePlan object

TechPreview

Enhanced the ClusterUpdatePlan object by adding a separate update step for each UpdateGroup of worker nodes of a managed cluster. The feature allows the operator to granularly control the update process and its impact on workloads, with the option to pause the update after each step.

Also, added several StackLight alerts to notify the operator about the update progress and potential update issues.

Refactoring of delayed auto-update of a management cluster

Refactored the MCCUpgrade object by implementing a new mechanism to delay Container Cloud release updates. You now have the following options for auto-update of a management cluster:

  • Automatically update a cluster on the publish day of a new release (by default).

  • Set specific days and hours for an auto-update allowing delays of up to one week. For example, if a release becomes available on Monday, you can delay it until Sunday by setting Sunday as the only permitted day for auto-updates.

  • Delay auto-update for a minimum of 20 days for each newly discovered release. The exact number of delay days is set in the release metadata and cannot be changed by the user. It depends on the specifics of each release cycle and on the optional configuration of week days and hours selected for update.

    You can verify the exact date of a scheduled auto-update either in the Status section of the Management Cluster Updates page in the web UI or in the status section of the MCCUpgrade object.

  • Combine auto-update delay with the specific days and hours setting (two previous options).

Also, optimized monitoring of auto-update by implementing several StackLight metrics for the kaas-exporter job along with the MCCUpdateBlocked and MCCUpdateScheduled alerts to notify the cloud operator about new releases as well as other important information about management cluster auto-update.

Container Cloud web UI enhancements for the bare metal provider

Refactored and improved UX visibility as well as added the following functionality for the bare metal managed clusters in the Container Cloud web UI:

  • Reworked the Create Subnets page:

    • Added the possibility to delete a subnet when it is not used by a cluster

    • Changed the default value of Use whole CIDR from true to false

    • Added storage subnet types: Storage access and Storage replication

  • Added the MetalLB Configs tab with configuration fields for MetalLB on the Networks page

  • Optimized the Create new machine form

  • Replicated the Create Credential form on the Baremetal page for easy access

  • Added the Labels fields to the Create L2 template and Create host profile forms as well as optimized uploading of specification data for these objects

Documentation enhancements

On top of continuous improvements delivered to the existing Container Cloud guides, added documentation on how to run Ceph performance tests using Kubernetes batch or cron jobs that run fio processes according to a predefined KaaSCephOperationRequest CR.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.28.0 along with the Cluster releases 17.3.0 and 16.3.0.

Note

This section provides descriptions of issues addressed since the last Container Cloud patch release 2.27.4.

For details on addressed issues in earlier patch releases since 2.27.0, which are also included in the major release 2.28.0, refer to 2.27.x patch releases.

  • [41305] [Bare metal] Fixed the issue with newly added management cluster nodes failing to undergo provisioning if the management cluster nodes were configured with a single L2 segment used for all network traffic (PXE and LCM/management networks).

  • [46245] [Bare metal] Fixed the issue with lack of permissions for serviceuser and users with the global-admin and operator roles to fetch HostOSConfigurationModules and HostOSConfiguration custom resources.

  • [43164] [StackLight] Fixed the issue with the rollover policy not being added to indices created without a policy.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.28.0 including the Cluster releases 17.3.0 and 16.3.0.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[47202] Inspection error on bare metal hosts after dnsmasq restart

Note

Moving forward, the workaround for this issue will be moved from Release Notes to MOSK Troubleshooting Guide: Inspection error on bare metal hosts after dnsmasq restart.

If the dnsmasq pod is restarted during the bootstrap of newly added nodes, those nodes may fail to undergo inspection. This can result in an inspection error in the corresponding BareMetalHost objects.

The issue can occur when:

  • The dnsmasq pod was moved to another node.

  • DHCP subnets were changed, including addition or removal. In this case, the dhcpd container of the dnsmasq pod is restarted.

    Caution

    If changing or adding DHCP subnets is required to bootstrap new nodes, wait until the dnsmasq pod becomes ready after the change, and only then create the BareMetalHost objects.
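
    For example, a readiness check to run before creating the BareMetalHost objects, assuming the dnsmasq pod runs in the kaas namespace as in the verification procedure below:

    kubectl -n kaas wait --for=condition=Ready pod <dnsmasq-pod-name> --timeout=300s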

To verify whether the nodes are affected:

  1. Verify whether the BareMetalHost objects contain the inspection error:

    kubectl get bmh -n <managed-cluster-namespace-name>
    

    Example of system response:

    NAME            STATE         CONSUMER        ONLINE   ERROR              AGE
    test-master-1   provisioned   test-master-1   true                        9d
    test-master-2   provisioned   test-master-2   true                        9d
    test-master-3   provisioned   test-master-3   true                        9d
    test-worker-1   provisioned   test-worker-1   true                        9d
    test-worker-2   provisioned   test-worker-2   true                        9d
    test-worker-3   inspecting                    true     inspection error   19h
    
  2. Verify whether the dnsmasq pod was in Ready state when the inspection of the affected baremetal hosts (test-worker-3 in the example above) was started:

    kubectl -n kaas get pod <dnsmasq-pod-name> -oyaml
    

    Example of system response:

    ...
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: Initialized
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: ContainersReady
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: PodScheduled
      containerStatuses:
      - containerID: containerd://6dbcf2fc4b36ce4c549c9191ab01f72d0236c51d42947675302675e4bfaf4cdf
        image: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq:base-2-28-alpine-20240812132650
        imageID: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq@sha256:3dad3e278add18e69b2608e462691c4823942641a0f0e25e6811e703e3c23b3b
        lastState:
          terminated:
            containerID: containerd://816fcf079cd544acd74e312065de5b5ed4dbf1dc6159fefffff4f644b5e45987
            exitCode: 0
            finishedAt: "2024-10-11T07:38:35Z"
            reason: Completed
            startedAt: "2024-10-10T15:37:45Z"
        name: dhcpd
        ready: true
        restartCount: 2
        started: true
        state:
          running:
            startedAt: "2024-10-11T07:38:37Z"
      ...
    

    In the system response above, the dhcpd container was not ready between "2024-10-11T07:38:35Z" and "2024-10-11T07:38:54Z".

  3. Verify the affected baremetal host. For example:

    kubectl get bmh -n managed-ns test-worker-3 -oyaml
    

    Example of system response:

    ...
    status:
      errorCount: 15
      errorMessage: Introspection timeout
      errorType: inspection error
      ...
      operationHistory:
        deprovision:
          end: null
          start: null
        inspect:
          end: null
          start: "2024-10-11T07:38:19Z"
        provision:
          end: null
          start: null
        register:
          end: "2024-10-11T07:38:19Z"
          start: "2024-10-11T07:37:25Z"
    

    In the system response above, inspection was started at "2024-10-11T07:38:19Z", immediately before the period of the dhcpd container downtime. Therefore, this node is most likely affected by the issue.

Workaround

  1. Reboot the node using the IPMI reset or cycle command.

  2. If the node fails to boot, remove the failed BareMetalHost object and create it again:

    1. Remove BareMetalHost object. For example:

      kubectl delete bmh -n managed-ns test-worker-3
      
    2. Verify that the BareMetalHost object is removed:

      kubectl get bmh -n managed-ns test-worker-3
      
    3. Create a BareMetalHost object from the template. For example:

      kubectl create -f bmhc-test-worker-3.yaml
      kubectl create -f bmh-test-worker-3.yaml
      
[42386] A load balancer service does not obtain the external IP address

Due to the MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first service that was changed is left with no external IP address assigned, while the second service, which was changed later, obtains the external IP as expected.

To work around the issue, make a dummy change to the service object where external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.


Ceph
[50566] Ceph upgrade is very slow during patch or major cluster update

Due to the upstream Ceph issue 66717, during a CVE-related upgrade of the Ceph daemon image for Ceph Reef 18.2.4, OSDs may start slowly and even fail the startup probe, with the following describe output in the rook-ceph-osd-X pod:

 Warning  Unhealthy  57s (x16 over 3m27s)  kubelet  Startup probe failed:
 ceph daemon health check failed with the following output:
> no valid command found; 10 closest matches:
> 0
> 1
> 2
> abort
> assert
> bluefs debug_inject_read_zeros
> bluefs files list
> bluefs stats
> bluestore bluefs device info [<alloc_size:int>]
> config diff
> admin_socket: invalid command

Workaround:

Complete the following steps during every patch or major cluster update of the Cluster releases 17.2.x, 17.3.x, and 17.4.x (until Ceph 18.2.5 becomes supported):

  1. Plan extra time in the maintenance window for the patch cluster update.

    Slow starts will still impact the update procedure, but after completing the following step, the recovery process becomes noticeably shorter without affecting the overall cluster state and data responsiveness.

  2. Select one of the following options:

    • Before the cluster update, set the noout flag:

      ceph osd set noout
      

      Once the Ceph OSDs image upgrade is done, unset the flag:

      ceph osd unset noout
      
    • Monitor the Ceph OSDs image upgrade. If the symptoms of slow start appear, set the noout flag as soon as possible. Once the Ceph OSDs image upgrade is done, unset the flag.

[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a managed cluster based on bare metal with Ceph enabled fails with a PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the descriptions of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.


LCM
[46808] Old kernel metapackages are not removed during kernel upgrade

Fixed in 2.28.1 (17.2.5, 16.2.5, and 16.3.1)

After upgrade of the kernel to the latest supported version, old kernel metapackages may remain on the cluster. The issue occurs if the system kernel line is changed from LTS to HWE. This setting is controlled by the upgrade_kernel_version parameter located in the ClusterRelease object under the deploy StateItem. As a result, the operating system has both LTS and HWE kernel packages installed and regularly updated, but only one kernel image is used (loaded into memory). The unused kernel images consume a minimal amount of disk space.

Therefore, you can safely disregard the issue because it does not affect cluster operability. If you still require removing unused kernel metapackages, contact Mirantis support for detailed instructions.

[39437] Failure to replace a master node on a Container Cloud cluster

Fixed in 2.29.0 (17.4.0 and 16.4.0)

During the replacement of a master node on a cluster of any type, the process may get stuck with Kubelet's NodeReady condition is Unknown in the machine status on the remaining master nodes.

As a workaround, log in on the affected node and run the following command:

docker restart ucp-kubelet
[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.

[30294] Replacement of a master node is stuck on the calico-node Pod start

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a master node on a cluster of any type, the calico-node Pod fails to start on a new node that has the same IP address as the node being replaced.

Workaround:

  1. Log in to any master node.

  2. From a CLI with an MKE client bundle, create a shell alias to start calicoctl using the mirantis/ucp-dsinfo image. Use one of the following two options depending on the location of the etcd certificates on your cluster:

    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/key.pem \
    -e ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/ca.pem \
    -e ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v /var/lib/docker/volumes/ucp-kv-certs/_data:/var/lib/docker/volumes/ucp-kv-certs/_data:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl \
    "
    
    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \
    -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \
    -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v ucp-node-certs:/ucp-node-certs:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl --allow-version-mismatch \
    "
    

    In the above command, replace the following values with the corresponding settings of the affected cluster:

    • <etcdEndpoint> is the etcd endpoint defined in the Calico configuration file. For example, ETCD_ENDPOINTS=127.0.0.1:12378

    • <mkeVersion> is the MKE version installed on your cluster. For example, 3.5.7, which results in the mirantis/ucp-dsinfo:3.5.7 image.

  3. Verify the node list on the cluster:

    kubectl get node
    
  4. Compare this list with the node list in Calico to identify the old node:

    calicoctl get node -o wide
    
  5. Remove the old node from Calico:

    calicoctl delete node kaas-node-<nodeID>
    
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>

StackLight
[47594] Patroni pods may get stuck in the CrashLoopBackOff state

Fixed in 2.28.3 (17.2.7, 16.2.7, and 16.3.3)

The Patroni pods may get stuck in the CrashLoopBackOff state due to the patroni container being terminated with reason: OOMKilled, which you can see in the pod status. For example:

kubectl get pod/patroni-13-0 -n stacklight -o yaml
...
  - containerID: docker://<ID>
    image: mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240828023010
    imageID: docker-pullable://mirantis.azurecr.io/stacklight/spilo@sha256:<ID>
    lastState:
      terminated:
        containerID: docker://<ID>
        exitCode: 137
        finishedAt: "2024-10-17T14:26:25Z"
        reason: OOMKilled
        startedAt: "2024-10-17T14:23:25Z"
    name: patroni
...

As a workaround, increase the memory limit for PostgreSQL to 20Gi in the Cluster object:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: stacklight
        values:
          resources:
            postgresql:
              limits:
                memory: "20Gi"

For a detailed procedure of StackLight configuration, see MOSK Operations Guide: Configure StackLight. For description of the resources option, see MOSK Operations Guide: StackLight configuration parameters - Resource limits.

[47304] OpenSearch does not store kubelet logs

Fixed in 2.28.2 (17.2.6, 16.2.6, and 16.3.2)

Due to the JSON-based format of ucp-kubelet logs, OpenSearch does not store kubelet logs. Mirantis is working on the issue and will deliver the resolution in one of the upcoming patch releases.

[44193] OpenSearch reaches 85% disk usage watermark affecting the cluster state

Fixed in 2.29.0 (17.4.0 and 16.4.0)

On High Availability (HA) clusters that use Local Volume Provisioner (LVP), Prometheus and OpenSearch from StackLight may share the same pool of storage. In such a configuration, OpenSearch may approach the 85% disk usage watermark due to the combined storage allocation and usage patterns set by the Persistent Volume Claim (PVC) size parameters for Prometheus and OpenSearch, which consume storage the most.

When the 85% threshold is reached, the affected node is transitioned to the read-only state, preventing shard allocation and causing the OpenSearch cluster state to transition to Warning (Yellow) or Critical (Red).

Caution

The issue and the provided workaround apply only for clusters on which OpenSearch and Prometheus utilize the same storage pool.

To verify that the cluster is affected:

  1. Verify the result of the following formula:

    0.8 × OpenSearch_PVC_Size_GB + Prometheus_PVC_Size_GB > 0.85 × Total_Storage_Capacity_GB
    

    In the formula, define the following values:

    OpenSearch_PVC_Size_GB

    Derived from .values.elasticsearch.persistentVolumeUsableStorageSizeGB, defaulting to .values.elasticsearch.persistentVolumeClaimSize if unspecified. To obtain the OpenSearch PVC size:

    kubectl -n <namespaceName> get cluster <clusterName> -o yaml |\
    yq '.spec.providerSpec.value.helmReleases[] | select(.name == "stacklight") | .values.elasticsearch.persistentVolumeClaimSize '
    

    Example of system response:

    10000Gi
    
    Prometheus_PVC_Size_GB

    Sourced from .values.prometheusServer.persistentVolumeClaimSize. To obtain the Prometheus PVC size:

    kubectl -n <namespaceName> get cluster <clusterName> -o yaml |\
    yq '.spec.providerSpec.value.helmReleases[] | select(.name == "stacklight") | .values.prometheusServer.persistentVolumeClaimSize '
    

    Example of system response:

    4000Gi
    
    Total_Storage_Capacity_GB

    Total capacity of the OpenSearch PVCs. For LVP, the capacity of the storage pool. To obtain the total capacity:

    kubectl get pvc -n stacklight -l app=opensearch-master \
    -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
    

    The system response contains multiple outputs, one per opensearch-master node. Select the capacity for the affected node.

    Note

    Convert the values to GB if they are set in different units.

    If the formula result is positive, it is an early indication that the cluster is affected.

  2. Verify whether the OpenSearchClusterStatusWarning or OpenSearchClusterStatusCritical alert is firing. If so, verify the following:

    1. Log in to the OpenSearch web UI.

    2. In Management -> Dev Tools, run the following command:

      GET _cluster/allocation/explain
      

      The following system response indicates that the corresponding node is affected:

      "explanation": "the node is above the low watermark cluster setting \
      [cluster.routing.allocation.disk.watermark.low=85%], using more disk space \
      than the maximum allowed [85.0%], actual free: [xx.xxx%]"
      

      Note

      The system response may contain a watermark percentage higher than 85.0%, depending on the case.

Workaround:

Warning

The workaround implies adjustment of the retention threshold for OpenSearch. Depending on the new threshold, some old logs will be deleted.

  1. Adjust or set .values.elasticsearch.persistentVolumeUsableStorageSizeGB to a lower value so that the result of the verification formula above becomes non-positive. For configuration details, see MOSK Operations Guide: StackLight configuration parameters - OpenSearch.

    Mirantis also recommends reserving some space for other PVCs using storage from the pool. Use the following formula to calculate the required space:

    persistentVolumeUsableStorageSizeGB =
    0.84 × ((1 - Reserved_Percentage - Filesystem_Reserve) ×
    Total_Storage_Capacity_GB - Prometheus_PVC_Size_GB) /
    0.8
    

    In the formula, define the following values:

    Reserved_Percentage

    A user-defined variable that specifies what percentage of the total storage capacity should not be used by OpenSearch or Prometheus. This is used to reserve space for other components. It should be expressed as a decimal. For example, for 5% of reservation, Reserved_Percentage is 0.05. Mirantis recommends using 0.05 as a starting point.

    Filesystem_Reserve

    Percentage to deduct for filesystems that may reserve some portion of the available storage, which is marked as occupied. For example, for EXT4, it is 5% by default, so the value must be 0.05.

    Prometheus_PVC_Size_GB

    Sourced from .values.prometheusServer.persistentVolumeClaimSize.

    Total_Storage_Capacity_GB

    Total capacity of the OpenSearch PVCs. For LVP, the capacity of the storage pool. To obtain the total capacity:

    kubectl get pvc -n stacklight -l app=opensearch-master \
    -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
    

    The system response contains multiple outputs, one per opensearch-master node. Select the capacity for the affected node.

    Note

    Convert the values to GB if they are set in different units.

    The formula above calculates the maximum safe storage to allocate for .values.elasticsearch.persistentVolumeUsableStorageSizeGB. Use it as a reference when setting this parameter on a cluster. A worked example is provided after this procedure.

  2. Wait up to 15-20 minutes for OpenSearch to perform the cleanup.

  3. Verify that the cluster is no longer affected using the procedure above.
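For illustration, assume a 16000 GB total storage capacity, a 4000 GB Prometheus PVC, Reserved_Percentage of 0.05, and Filesystem_Reserve of 0.05 (EXT4). These values are example assumptions only. The formula from step 1 of the workaround then yields:

persistentVolumeUsableStorageSizeGB = 0.84 × ((1 - 0.05 - 0.05) × 16000 - 4000) / 0.8 = 10920

The following snippet is a sketch of where the parameter resides in the Cluster object, based on the path used by the verification commands above; the value 10920 is the assumed example result and must be replaced with your own calculation:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: stacklight
        values:
          elasticsearch:
            persistentVolumeUsableStorageSizeGB: 10920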


Container Cloud web UI
[50181] Failure to deploy a compact cluster

A compact MOSK cluster fails to be deployed through the Container Cloud web UI because the web UI does not allow adding any label to the control plane machines or changing the dedicatedControlPlane: false setting.

To work around the issue, manually add the required labels using the CLI. Once done, the cluster deployment resumes.

[50168] Inability to use a new project right after creation

A newly created project does not display all available tabs in the Container Cloud web UI and produces various access denied errors during the first five minutes after creation.

To work around the issue, refresh the browser five minutes after the project creation.

Components versions

The following table lists the major components and their versions delivered in Container Cloud 2.28.0. The components that are newly added, updated, deprecated, or removed as compared to 2.27.0, are marked with a corresponding superscript, for example, admission-controller Updated.

Component

Application/Service

Version

Bare metal

baremetal-dnsmasq Updated

base-2-28-alpine-20240906160120

baremetal-operator Updated

base-2-28-alpine-20240910093836

baremetal-provider Updated

1.41.14

bm-collective Updated

base-2-28-alpine-20240910093747

cluster-api-provider-baremetal Updated

1.41.14

ironic Updated

antelope-jammy-20240716113922

ironic-inspector Updated

antelope-jammy-20240716113922

ironic-prometheus-exporter

0.1-20240819102310

kaas-ipam Updated

base-2-28-alpine-20240910095249

kubernetes-entrypoint

v1.0.1-4e381cb-20240813170642

mariadb

10.6.17-focal-20240523075821

metallb-controller Updated

v0.14.5-ed177720-amd64

metallb-speaker Updated

v0.14.5-ed177720-amd64

syslog-ng Updated

base-alpine-20240906155734

Container Cloud

admission-controller Updated

1.41.14

agent-controller Updated

1.41.14

byo-cluster-api-controller Updated

1.41.14

byo-credentials-controller Removed

n/a

ceph-kcc-controller Updated

1.41.14

cert-manager-controller Updated

1.11.0-8

configuration-collector Updated

1.41.14

event-controller Updated

1.41.14

frontend Updated

1.41.14

golang

1.22.7

iam-controller Updated

1.41.14

kaas-exporter Updated

1.41.14

kproxy Updated

1.41.14

lcm-controller Updated

1.41.14

license-controller Updated

1.41.14

machinepool-controller Updated

1.41.14

mcc-haproxy Updated

0.26.0-95-g95f0130

nginx Updated

1.41.14

portforward-controller Updated

1.41.14

proxy-controller Updated

1.41.14

rbac-controller Updated

1.41.14

registry Updated

2.8.1-13

release-controller Updated

1.41.14

rhellicense-controller Removed

n/a

scope-controller Updated

1.41.14

secret-controller Updated

1.41.14

storage-discovery Updated

1.41.14

user-controller Updated

1.41.14

IAM Updated

iam

1.41.14

mariadb

10.6.17-focal-20240909113408

mcc-keycloak Updated

25.0.6-20240926140203

OpenStack Updated

host-os-modules-controller

1.41.14

openstack-cluster-api-controller

1.41.14

openstack-provider

1.41.14

os-credentials-controller Removed

n/a

Artifacts

This section lists the artifacts of components included in the Container Cloud release 2.28.0. The components that are newly added, updated, deprecated, or removed as compared to 2.27.0 are marked with a corresponding superscript, for example, admission-controller Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries Updated

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20240911112529

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20240911112529

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.41.14.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.41.14.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.41.14.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.41.14.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.41.14.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.41.14.tgz

Docker images

ambasador Updated

mirantis.azurecr.io/core/external/nginx:1.41.14

baremetal-dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-28-alpine-20240906160120

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-2-28-alpine-20240910093836

bm-collective Updated

mirantis.azurecr.io/bm/bm-collective:base-2-28-alpine-20240910093747

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.41.14

ironic Updated

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240716113922

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:antelope-jammy-20240716113922

ironic-prometheus-exporter Updated

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240819102310

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-2-28-alpine-20240910095249

kubernetes-entrypoint Updated

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-4e381cb-20240813170642

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240523075821

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.26.0-95-g95f0130

metallb-controller Updated

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-ed177720-amd64

metallb-speaker Updated

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-ed177720-amd64

syslog-ng Updated

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20240906155734

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.41.14.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.41.14.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.41.14.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.41.14.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.41.14.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.41.14.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.41.14.tgz

credentials-controller New

https://binary.mirantis.com/core/helm/credentials-controller-1.41.14.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.41.14.tgz

host-os-modules-controller

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.41.14.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.41.14.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.41.14.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.41.14.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.41.14.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.41.14.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.41.14.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.41.14.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.41.14.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.41.14.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.41.14.tgz

os-credentials-controller Removed

n/a

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.41.14.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.41.14.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.41.14.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.41.14.tgz

secret-controller

https://binary.mirantis.com/core/helm/secret-controller-1.41.14.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.41.14.tgz

Docker images Updated

admission-controller

mirantis.azurecr.io/core/admission-controller:1.41.14

agent-controller

mirantis.azurecr.io/core/agent-controller:1.41.14

ceph-kcc-controller

mirantis.azurecr.io/core/ceph-kcc-controller:1.41.14

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-8

configuration-collector

mirantis.azurecr.io/core/configuration-collector:1.41.14

credentials-controller New

mirantis.azurecr.io/core/credentials-controller:1.41.14

event-controller

mirantis.azurecr.io/core/event-controller:1.41.14

frontend

mirantis.azurecr.io/core/frontend:1.41.14

host-os-modules-controller

mirantis.azurecr.io/core/host-os-modules-controller:1.41.14

iam-controller

mirantis.azurecr.io/core/iam-controller:1.41.14

kaas-exporter

mirantis.azurecr.io/core/kaas-exporter:1.41.14

kproxy

mirantis.azurecr.io/core/kproxy:1.41.14

lcm-controller

mirantis.azurecr.io/core/lcm-controller:1.41.14

license-controller

mirantis.azurecr.io/core/license-controller:1.41.14

machinepool-controller

mirantis.azurecr.io/core/machinepool-controller:1.41.14

mcc-cache-warmup

mirantis.azurecr.io/core/mcc-cache-warmup:1.41.14

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.26.0-95-g95f0130

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.26.0-95-g95f0130

nginx

mirantis.azurecr.io/core/external/nginx:1.41.14

openstack-cluster-api-controller

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.41.14

os-credentials-controller Removed

n/a

portforward-controller

mirantis.azurecr.io/core/portforward-controller:1.41.14

rbac-controller

mirantis.azurecr.io/core/rbac-controller:1.41.14

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-13

release-controller

mirantis.azurecr.io/core/release-controller:1.41.14

scope-controller

mirantis.azurecr.io/core/scope-controller:1.41.14

secret-controller

mirantis.azurecr.io/core/secret-controller:1.41.14

user-controller

mirantis.azurecr.io/core/user-controller:1.41.14

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/core/helm/iam-1.41.14.tgz

Docker images

kubectl Updated

mirantis.azurecr.io/general/kubectl:20240926142019

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-ba8ada4-20240405150338

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240909113408

mcc-keycloak Updated

mirantis.azurecr.io/iam/mcc-keycloak:25.0.6-20240926140203

Security notes

In total, since Container Cloud 2.27.0, 2614 Common Vulnerabilities and Exposures (CVE) have been fixed in 2.28.0: 299 of critical and 2315 of high severity.

The table below includes the total numbers of addressed unique and common vulnerabilities and exposures (CVE) by product component since the 2.27.4 patch release. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component

CVE type

Critical

High

Total

Ceph

Unique

0

5

5

Common

0

211

211

KaaS core

Unique

4

11

15

Common

10

315

325

StackLight

Unique

1

7

8

Common

1

25

26

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 24.3: Security notes.

Update notes

This section describes the specific actions you as a cloud operator need to complete before or after your Container Cloud cluster update to the Cluster releases 17.3.0 or 16.3.0.

Consider this information as a supplement to the generic update procedures published in Operations Guide: Automatic upgrade of a management cluster and Update a managed cluster.

Pre-update actions
Change label values in Ceph metrics used in customizations

Note

If you do not use Ceph metrics in any customizations, for example, custom alerts, Grafana dashboards, or queries in custom workloads, skip this section.

In Container Cloud 2.27.0, the performance metric exporter integrated into the Ceph Manager daemon was deprecated in favor of the dedicated Ceph Exporter daemon. Therefore, if you use Ceph metrics in any customizations such as custom alerts, Grafana dashboards, or queries in custom tools, you may need to update the values of several labels in these metrics. These label values change in Container Cloud 2.28.0 (Cluster releases 16.3.0 and 17.3.0).

Note

Only label values of the metrics are changed; metric names are not changed and no metrics are removed.

All Ceph metrics that are moved to the Ceph Exporter daemon have changed values of the job and instance labels because they are now scraped from the new Ceph Exporter daemon instead of the performance metric exporter of Ceph Manager:

  • Values of the job labels are changed from rook-ceph-mgr to prometheus-rook-exporter for all Ceph metrics moved to Ceph Exporter. The full list of moved metrics is presented below.

  • Values of the instance labels are changed from the metric endpoint of Ceph Manager with port 9283 to the metric endpoint of Ceph Exporter with port 9926 for all Ceph metrics moved to Ceph Exporter. The full list of moved metrics is presented below.

  • Values of the instance_id labels of Ceph metrics from the RADOS Gateway (RGW) daemons are changed from the daemon GID to the daemon subname. For example, instead of instance_id="<RGW_PROCESS_GID>", the instance_id="a" (ceph_rgw_qlen{instance_id="a"}) is now used. The list of moved Ceph RGW metrics is presented below.
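For example, a custom alert expression or Grafana query that selects one of the moved metrics by the old label values must be updated to use the new values. The selectors below are an illustration only; actual expressions depend on your customizations:

# Before Container Cloud 2.28.0
ceph_osd_op{job="rook-ceph-mgr"}
ceph_rgw_qlen{instance_id="<RGW_PROCESS_GID>"}

# Since Container Cloud 2.28.0
ceph_osd_op{job="prometheus-rook-exporter"}
ceph_rgw_qlen{instance_id="a"}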

List of affected Ceph RGW metrics
  • ceph_rgw_cache_.*

  • ceph_rgw_failed_req

  • ceph_rgw_gc_retire_object

  • ceph_rgw_get.*

  • ceph_rgw_keystone_.*

  • ceph_rgw_lc_.*

  • ceph_rgw_lua_.*

  • ceph_rgw_pubsub_.*

  • ceph_rgw_put.*

  • ceph_rgw_qactive

  • ceph_rgw_qlen

  • ceph_rgw_req

List of all metrics to be collected by Ceph Exporter instead of Ceph Manager
  • ceph_bluefs_.*

  • ceph_bluestore_.*

  • ceph_mds_cache_.*

  • ceph_mds_caps

  • ceph_mds_ceph_.*

  • ceph_mds_dir_.*

  • ceph_mds_exported_inodes

  • ceph_mds_forward

  • ceph_mds_handle_.*

  • ceph_mds_imported_inodes

  • ceph_mds_inodes.*

  • ceph_mds_load_cent

  • ceph_mds_log_.*

  • ceph_mds_mem_.*

  • ceph_mds_openino_dir_fetch

  • ceph_mds_process_request_cap_release

  • ceph_mds_reply_.*

  • ceph_mds_request

  • ceph_mds_root_.*

  • ceph_mds_server_.*

  • ceph_mds_sessions_.*

  • ceph_mds_slow_reply

  • ceph_mds_subtrees

  • ceph_mon_election_.*

  • ceph_mon_num_.*

  • ceph_mon_session_.*

  • ceph_objecter_.*

  • ceph_osd_numpg.*

  • ceph_osd_op.*

  • ceph_osd_recovery_.*

  • ceph_osd_stat_.*

  • ceph_paxos.*

  • ceph_prioritycache.*

  • ceph_purge.*

  • ceph_rgw_cache_.*

  • ceph_rgw_failed_req

  • ceph_rgw_gc_retire_object

  • ceph_rgw_get.*

  • ceph_rgw_keystone_.*

  • ceph_rgw_lc_.*

  • ceph_rgw_lua_.*

  • ceph_rgw_pubsub_.*

  • ceph_rgw_put.*

  • ceph_rgw_qactive

  • ceph_rgw_qlen

  • ceph_rgw_req

  • ceph_rocksdb_.*

Post-update actions
Manually disable collection of performance metrics by Ceph Manager (optional)

Since Container Cloud 2.28.0 (Cluster releases 17.3.0 and 16.3.0), Ceph cluster metrics are collected by the dedicated Ceph Exporter daemon. At the same time, the same metrics are still available for collection by the Ceph Manager daemon. To improve performance of the Ceph Manager daemon, you can manually disable its collection of the performance metrics that are already collected by the Ceph Exporter daemon.

To disable performance metrics for the Ceph Manager daemon, add the following parameter to the KaaSCephCluster spec in the rookConfig section:

spec:
  cephClusterSpec:
    rookConfig:
      "mgr|mgr/prometheus/exclude_perf_counters": "true"

Once you add this option, Ceph performance metrics are collected by the Ceph Exporter daemon only. For more details, see Official Ceph documentation.
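One possible way to add the parameter, sketched below under the assumption that the KaaSCephCluster object resides in the project namespace of the related cluster on the management cluster, is to edit the object directly and insert the rookConfig entry shown above:

kubectl -n <managedClusterProjectName> edit kaascephcluster <kaasCephClusterName>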

Upgrade to Ubuntu 22.04 on baremetal-based clusters

In Container Cloud 2.29.0, the Cluster release update of the Ubuntu 20.04-based managed clusters will become impossible, and Ubuntu 22.04 will become the only supported version of the operating system. Therefore, ensure that every node of your managed clusters is running Ubuntu 22.04 to unblock the managed cluster update in Container Cloud 2.29.0.

Warning

Management cluster update to Container Cloud 2.29.1 will be blocked if at least one node of any related managed cluster is running Ubuntu 20.04.

Therefore, if your existing cluster runs nodes on Ubuntu 20.04, prevent blocking of your cluster update by upgrading all cluster nodes to Ubuntu 22.04 during the course of the Container Cloud 2.28.x series. For the update procedure, refer to Mirantis OpenStack for Kubernetes documentation: Bare metal operations - Upgrade an operating system distribution.

It is not mandatory to upgrade all machines at once. You can upgrade them one by one or in small batches, for example, if the maintenance window is limited in time.

Note

Existing management clusters were automatically updated to Ubuntu 22.04 during cluster upgrade to the Cluster release 16.2.0 in Container Cloud 2.27.0. Greenfield deployments of management clusters are also based on Ubuntu 22.04.

Warning

Usage of third-party software, which is not part of Mirantis-supported configurations, for example, the use of custom DPDK modules, may block upgrade of an operating system distribution. Users are fully responsible for ensuring the compatibility of such custom components with the latest supported Ubuntu version.

2.27.4

Note

For MOSK clusters, Container Cloud 2.27.4 is the second patch release of MOSK 24.2.x series using the patch Cluster release 17.2.4. For the update path of 24.1 and 24.2 series, see MOSK documentation: Cluster update scheme.

The Container Cloud patch release 2.27.4, which is based on the 2.27.0 major release, provides the following updates:

  • Support for the patch Cluster releases 16.2.4 and 17.2.4 that represents Mirantis OpenStack for Kubernetes (MOSK) patch release 24.2.2.

  • Bare metal: update of Ubuntu mirror from ubuntu-2024-08-06-014502 to ubuntu-2024-08-21-014714 along with update of the minor kernel version from 5.15.0-117-generic to 5.15.0-119-generic for Jammy and to 5.15.0-118-generic for Focal.

  • Security fixes for CVEs in images.

This patch release also supports the latest major Cluster releases 17.2.0 and 16.2.0. It does not support greenfield deployments based on deprecated Cluster releases. Use the latest available Cluster release instead.

For main deliverables of the parent Container Cloud release of 2.27.4, refer to 2.27.0.

Security notes

In total, since Container Cloud 2.27.3, 131 Common Vulnerabilities and Exposures (CVE) have been fixed in 2.27.4: 15 of critical and 116 of high severity.

The table below includes the total numbers of addressed unique and common CVEs in images by product component since Container Cloud 2.27.3. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component

CVE type

Critical

High

Total

Ceph

Unique

0

1

1

Common

0

3

3

KaaS core

Unique

3

19

22

Common

14

105

119

StackLight

Unique

1

8

9

Common

1

8

9

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 24.2.2: Security notes.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.27.4 including the Cluster releases 16.2.4 and 17.2.4.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[47202] Inspection error on bare metal hosts after dnsmasq restart

Note

Moving forward, the workaround for this issue will be moved from Release Notes to MOSK Troubleshooting Guide: Inspection error on bare metal hosts after dnsmasq restart.

If the dnsmasq pod is restarted during the bootstrap of newly added nodes, those nodes may fail to undergo inspection, which can result in an inspection error in the corresponding BareMetalHost objects.

The issue can occur when:

  • The dnsmasq pod was moved to another node.

  • DHCP subnets were changed, including addition or removal. In this case, the dhcpd container of the dnsmasq pod is restarted.

    Caution

    If changing or adding DHCP subnets is required to bootstrap new nodes, wait until the dnsmasq pod becomes ready after the change, and only then create BareMetalHost objects.

To verify whether the nodes are affected:

  1. Verify whether the BareMetalHost objects contain the inspection error:

    kubectl get bmh -n <managed-cluster-namespace-name>
    

    Example of system response:

    NAME            STATE         CONSUMER        ONLINE   ERROR              AGE
    test-master-1   provisioned   test-master-1   true                        9d
    test-master-2   provisioned   test-master-2   true                        9d
    test-master-3   provisioned   test-master-3   true                        9d
    test-worker-1   provisioned   test-worker-1   true                        9d
    test-worker-2   provisioned   test-worker-2   true                        9d
    test-worker-3   inspecting                    true     inspection error   19h
    
  2. Verify whether the dnsmasq pod was in Ready state when the inspection of the affected baremetal hosts (test-worker-3 in the example above) was started:

    kubectl -n kaas get pod <dnsmasq-pod-name> -oyaml
    

    Example of system response:

    ...
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: Initialized
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: ContainersReady
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: PodScheduled
      containerStatuses:
      - containerID: containerd://6dbcf2fc4b36ce4c549c9191ab01f72d0236c51d42947675302675e4bfaf4cdf
        image: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq:base-2-28-alpine-20240812132650
        imageID: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq@sha256:3dad3e278add18e69b2608e462691c4823942641a0f0e25e6811e703e3c23b3b
        lastState:
          terminated:
            containerID: containerd://816fcf079cd544acd74e312065de5b5ed4dbf1dc6159fefffff4f644b5e45987
            exitCode: 0
            finishedAt: "2024-10-11T07:38:35Z"
            reason: Completed
            startedAt: "2024-10-10T15:37:45Z"
        name: dhcpd
        ready: true
        restartCount: 2
        started: true
        state:
          running:
            startedAt: "2024-10-11T07:38:37Z"
      ...
    

    In the system response above, the dhcpd container was not ready between "2024-10-11T07:38:35Z" and "2024-10-11T07:38:54Z".

  3. Verify the affected baremetal host. For example:

    kubectl get bmh -n managed-ns test-worker-3 -oyaml
    

    Example of system response:

    ...
    status:
      errorCount: 15
      errorMessage: Introspection timeout
      errorType: inspection error
      ...
      operationHistory:
        deprovision:
          end: null
          start: null
        inspect:
          end: null
          start: "2024-10-11T07:38:19Z"
        provision:
          end: null
          start: null
        register:
          end: "2024-10-11T07:38:19Z"
          start: "2024-10-11T07:37:25Z"
    

    In the system response above, inspection was started at "2024-10-11T07:38:19Z", immediately before the period of the dhcpd container downtime. Therefore, this node is most likely affected by the issue.

Workaround

  1. Reboot the node using the IPMI reset or cycle command.

  2. If the node fails to boot, remove the failed BareMetalHost object and create it again:

    1. Remove BareMetalHost object. For example:

      kubectl delete bmh -n managed-ns test-worker-3
      
    2. Verify that the BareMetalHost object is removed:

      kubectl get bmh -n managed-ns test-worker-3
      
    3. Create a BareMetalHost object from the template. For example:

      kubectl create -f bmhc-test-worker-3.yaml
      kubectl create -f bmh-test-worker-3.yaml
      
[46245] Lack of access permissions for HOC and HOCM objects

Fixed in 2.28.0 (17.3.0 and 16.3.0)

When trying to list the HostOSConfigurationModules and HostOSConfiguration custom resources, serviceuser or a user with the global-admin or operator role obtains the access denied error. For example:

kubectl --kubeconfig ~/.kube/mgmt-config get hocm

Error from server (Forbidden): hostosconfigurationmodules.kaas.mirantis.com is forbidden:
User "2d74348b-5669-4c65-af31-6c05dbedac5f" cannot list resource "hostosconfigurationmodules"
in API group "kaas.mirantis.com" at the cluster scope: access denied

Workaround:

  1. Modify the global-admin role by adding a new entry with the following contents to the rules list:

    kubectl edit clusterroles kaas-global-admin
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurationmodules]
      verbs: ['*']
    
  2. For each Container Cloud project, modify the kaas-operator role by adding a new entry with the following contents to the rules list:

    kubectl -n <projectName> edit roles kaas-operator
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurations]
      verbs: ['*']
    
[42386] A load balancer service does not obtain the external IP address

Due to the MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first service that has been changed remains with no external IP address assigned. However, the second service, which was changed later, has the external IP assigned as expected.

To work around the issue, make a dummy change to the service object where external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[41305] DHCP responses are lost between dnsmasq and dhcp-relay pods

Fixed in 2.28.0 (17.3.0 and 16.3.0)

After node maintenance of a management cluster, the newly added nodes may fail to undergo provisioning successfully. The issue relates to new nodes that are in the same L2 domain as the management cluster.

The issue was observed on environments having management cluster nodes configured with a single L2 segment used for all network traffic (PXE and LCM/management networks).

To verify whether the cluster is affected:

Verify whether the dnsmasq and dhcp-relay pods run on the same node in the management cluster:

kubectl -n kaas get pods -o wide| grep -e "dhcp\|dnsmasq"

Example of system response:

dhcp-relay-7d85f75f76-5vdw2   2/2   Running   2 (36h ago)   36h   10.10.0.122     kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (36h ago)   36h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>

If this is the case, proceed to the workaround below.

Workaround:

  1. Log in to a node that contains kubeconfig of the affected management cluster.

  2. Make sure that at least two management cluster nodes are schedulable:

    kubectl get node
    

    Example of a positive system response:

    NAME                                             STATUS   ROLES    AGE   VERSION
    kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-ad5a6f51-b98f-43c3-91d5-55fed3d0ff21   Ready    master   37h   v1.27.10-mirantis-1
    
  3. Delete the dhcp-relay pod:

    kubectl -n kaas delete pod <dhcp-relay-xxxxx>
    
  4. Verify that the dnsmasq and dhcp-relay pods are scheduled into different nodes:

    kubectl -n kaas get pods -o wide| grep -e "dhcp\|dnsmasq"
    

    Example of a positive system response:

    dhcp-relay-7d85f75f76-rkv03   2/2   Running   0             49s   10.10.0.121     kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   <none>   <none>
    dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (37h ago)   37h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.
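For illustration, a typical cordon-and-drain sequence may look as follows; the node name placeholder and the drain options are assumptions to adapt to your cluster:

kubectl cordon <nodeName>
kubectl drain <nodeName> --ignore-daemonsets --delete-emptydir-data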


LCM
[39437] Failure to replace a master node on a Container Cloud cluster

Fixed in 2.29.0 (17.4.0 and 16.4.0)

During the replacement of a master node on a cluster of any type, the process may get stuck with the Kubelet's NodeReady condition being Unknown in the machine status on the remaining master nodes.

As a workaround, log in on the affected node and run the following command:

docker restart ucp-kubelet
[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.
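The following commands are a minimal sketch of steps 3 and 4 above; the namespace and Pod name placeholders, as well as the /var/lib/mysql location of galera.cache, are assumptions to verify against your environment:

kubectl -n <namespaceName> exec <mariadbServerPodName> -- rm /var/lib/mysql/galera.cache
kubectl -n <namespaceName> delete pod <mariadbServerPodName>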

[30294] Replacement of a master node is stuck on the calico-node Pod start

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a master node on a cluster of any type, the calico-node Pod fails to start on a new node that has the same IP address as the node being replaced.

Workaround:

  1. Log in to any master node.

  2. From a CLI with an MKE client bundle, create a shell alias to start calicoctl using the mirantis/ucp-dsinfo image:

    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/key.pem \
    -e ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/ca.pem \
    -e ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v /var/lib/docker/volumes/ucp-kv-certs/_data:/var/lib/docker/volumes/ucp-kv-certs/_data:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl \
    "
    
    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \
    -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \
    -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v ucp-node-certs:/ucp-node-certs:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl --allow-version-mismatch \
    "
    

    In the above command, replace the following values with the corresponding settings of the affected cluster:

    • <etcdEndpoint> is the etcd endpoint defined in the Calico configuration file. For example, ETCD_ENDPOINTS=127.0.0.1:12378

    • <mkeVersion> is the MKE version installed on your cluster. For example, mirantis/ucp-dsinfo:3.5.7.

  3. Verify the node list on the cluster:

    kubectl get node
    
  4. Compare this list with the node list in Calico to identify the old node:

    calicoctl get node -o wide
    
  5. Remove the old node from Calico:

    calicoctl delete node kaas-node-<nodeID>
    
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>

Ceph
[50566] Ceph upgrade is very slow during patch or major cluster update

Due to the upstream Ceph issue 66717, during a CVE upgrade of the Ceph daemon image of Ceph Reef 18.2.4, OSDs may start slowly and even fail the startup probe with the following describe output in the rook-ceph-osd-X pod:

 Warning  Unhealthy  57s (x16 over 3m27s)  kubelet  Startup probe failed:
 ceph daemon health check failed with the following output:
> no valid command found; 10 closest matches:
> 0
> 1
> 2
> abort
> assert
> bluefs debug_inject_read_zeros
> bluefs files list
> bluefs stats
> bluestore bluefs device info [<alloc_size:int>]
> config diff
> admin_socket: invalid command

Workaround:

Complete the following steps during every patch or major cluster update of the Cluster releases 17.2.x, 17.3.x, and 17.4.x (until Ceph 18.2.5 becomes supported):

  1. Plan extra time in the maintenance window for the patch cluster update.

    Slow starts will still impact the update procedure, but after completing the following step, the recovery process noticeably shortens without affecting the overall cluster state and data responsiveness.

  2. Select one of the following options:

    • Before the cluster update, set the noout flag:

      ceph osd set noout
      

      Once the Ceph OSDs image upgrade is done, unset the flag:

      ceph osd unset noout
      
    • Monitor the Ceph OSDs image upgrade. If the symptoms of slow start appear, set the noout flag as soon as possible. Once the Ceph OSDs image upgrade is done, unset the flag.
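One possible way to monitor the Ceph OSDs image upgrade, assuming the rook-ceph-tools Deployment is available in the rook-ceph namespace, is to watch the daemon versions reported by Ceph until all OSDs run the target version:

kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph versions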

[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a managed cluster based on bare metal with Ceph enabled fails with a PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the description of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.

Container Cloud web UI
[50181] Failure to deploy a compact cluster

A compact MOSK cluster fails to be deployed through the Container Cloud web UI because the web UI does not allow adding any label to the control plane machines or changing the dedicatedControlPlane: false setting.

To work around the issue, manually add the required labels using the CLI. Once done, the cluster deployment resumes.

[50168] Inability to use a new project right after creation

A newly created project does not display all available tabs in the Container Cloud web UI and produces various access denied errors during the first five minutes after creation.

To work around the issue, refresh the browser five minutes after the project creation.

Patch cluster update
[49713] Patch update is stuck with some nodes in Prepare state

Patch update from 2.27.3 to 2.27.4 may get stuck with one or more management cluster nodes remaining in the Prepare state and with the following error in the lcm-controller logs on the management cluster:

failed to create cluster updater for cluster default/kaas-mgmt:
machine update group not found for machine default/master-0

To work around the issue, in the LCMMachine objects of the management cluster, set the following annotation:

lcm.mirantis.com/update-group: <mgmt cluster name>-controlplane

Once done, patch update of the cluster resumes automatically.
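For illustration, using the default namespace and the kaas-mgmt cluster name from the error example above, the annotation may be set on an affected LCMMachine object as follows; repeat for each machine that is stuck in the Prepare state:

kubectl -n default annotate lcmmachine master-0 \
  lcm.mirantis.com/update-group=kaas-mgmt-controlplane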

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.27.4. For artifacts of the Cluster releases introduced in 2.27.4, see patch Cluster releases 16.2.4 and 17.2.4.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries Updated

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20240821131059

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20240821131059

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.40.23.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.40.23.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.40.23.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.40.23.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.40.23.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.40.23.tgz

Docker images

ambasador Updated

mirantis.azurecr.io/core/external/nginx:1.40.23

baremetal-dnsmasq

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-27-alpine-20240806125028

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-2-27-alpine-20240827132225

bm-collective

mirantis.azurecr.io/bm/bm-collective:base-2-27-alpine-20240812135414

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.40.23

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240716113922

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:antelope-jammy-20240716113922

ironic-prometheus-exporter Updated

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240819102310

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-2-27-alpine-20240812140336

kubernetes-entrypoint Updated

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-4e381cb-20240813170642

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240523075821

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.25.0-42-g8710cbe

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-dfbd1a68-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-dfbd1a68-amd64

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20240806124545

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.40.23.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.40.23.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.40.23.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.40.23.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.40.23.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.40.23.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.40.23.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.40.23.tgz

host-os-modules-controller

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.40.23.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.40.23.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.40.23.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.40.23.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.40.23.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.40.23.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.40.23.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.40.23.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.40.23.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.40.23.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.40.23.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.40.23.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.40.23.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.40.23.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.40.23.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.40.23.tgz

secret-controller

https://binary.mirantis.com/core/helm/secret-controller-1.40.23.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.40.23.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.40.23

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.40.23

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.40.23

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.40.23

cert-manager-controller Updated

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-7

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.40.23

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.40.23

frontend Updated

mirantis.azurecr.io/core/frontend:1.40.23

host-os-modules-controller Updated

mirantis.azurecr.io/core/host-os-modules-controller:1.40.23

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.40.23

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.40.23

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.40.23

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.40.23

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.40.23

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.40.23

mcc-cache-warmup Updated

mirantis.azurecr.io/core/mcc-cache-warmup:1.40.23

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.25.0-42-g8710cbe

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.25.0-42-g8710cbe

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.40.23

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.40.23

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.40.23

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.40.23

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.40.23

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-11

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.40.23

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.40.23

secret-controller Updated

mirantis.azurecr.io/core/secret-controller:1.40.23

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.40.23

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/core/helm/iam-1.40.23.tgz

Docker images

kubectl

mirantis.azurecr.io/general/kubectl:20240711152257

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240523075821

mcc-keycloak

mirantis.azurecr.io/iam/mcc-keycloak:24.0.5-20240802071408

2.27.3

Important

For MOSK clusters, Container Cloud 2.27.3 is the first patch release of MOSK 24.2.x series using the patch Cluster release 17.2.3. For the update path of 24.1 and 24.2 series, see MOSK documentation: Cluster update scheme.

The Container Cloud patch release 2.27.3, which is based on the 2.27.0 major release, provides the following updates:

  • Support for the patch Cluster releases 16.2.3 and 17.2.3 that represents Mirantis OpenStack for Kubernetes (MOSK) patch release 24.2.1.

  • MKE:

    • Support for MKE 3.7.12.

    • Improvements in the MKE benchmark compliance (control ID 5.1.5): analyzed and fixed the majority of failed compliance checks for the following components:

      • Container Cloud: iam-keycloak in the kaas namespace and opensearch-dashboards in the stacklight namespace

      • MOSK: opensearch-dashboards in the stacklight namespace

  • Bare metal: update of Ubuntu mirror from ubuntu-2024-07-16-014744 to ubuntu-2024-08-06-014502 along with update of the minor kernel version from 5.15.0-116-generic to 5.15.0-117-generic.

  • VMware vSphere: suspension of support for cluster deployment, update, and attachment. For details, see Deprecation notes.

  • Security fixes for CVEs in images.

This patch release also supports the latest major Cluster releases 17.2.0 and 16.2.0. It does not support greenfield deployments based on deprecated Cluster releases. Use the latest available Cluster release instead.

For main deliverables of the parent Container Cloud release of 2.27.3, refer to 2.27.0.

Security notes

In total, since Container Cloud 2.27.2, 1559 Common Vulnerabilities and Exposures (CVE) have been fixed in 2.27.3: 253 of critical and 1306 of high severity.

The table below includes the total numbers of addressed unique and common CVEs in images by product component since Container Cloud 2.27.2. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component

CVE type

Critical

High

Total

Ceph

Unique

3

14

17

Common

142

736

878

KaaS core

Unique

4

22

26

Common

99

448

547

StackLight

Unique

7

51

58

Common

12

122

134

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 24.2.1: Security notes.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.27.3 including the Cluster releases 16.2.3 and 17.2.3.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[47202] Inspection error on bare metal hosts after dnsmasq restart

Note

Moving forward, the workaround for this issue will be moved from Release Notes to MOSK Troubleshooting Guide: Inspection error on bare metal hosts after dnsmasq restart.

If the dnsmasq pod is restarted during the bootstrap of newly added nodes, those nodes may fail to undergo inspection, which can result in an inspection error in the corresponding BareMetalHost objects.

The issue can occur when:

  • The dnsmasq pod was moved to another node.

  • DHCP subnets were changed, including addition or removal. In this case, the dhcpd container of the dnsmasq pod is restarted.

    Caution

    If changing or adding DHCP subnets is required to bootstrap new nodes, wait until the dnsmasq pod becomes ready after the change, and only then create BareMetalHost objects.

To verify whether the nodes are affected:

  1. Verify whether the BareMetalHost objects contain the inspection error:

    kubectl get bmh -n <managed-cluster-namespace-name>
    

    Example of system response:

    NAME            STATE         CONSUMER        ONLINE   ERROR              AGE
    test-master-1   provisioned   test-master-1   true                        9d
    test-master-2   provisioned   test-master-2   true                        9d
    test-master-3   provisioned   test-master-3   true                        9d
    test-worker-1   provisioned   test-worker-1   true                        9d
    test-worker-2   provisioned   test-worker-2   true                        9d
    test-worker-3   inspecting                    true     inspection error   19h
    
  2. Verify whether the dnsmasq pod was in Ready state when the inspection of the affected baremetal hosts (test-worker-3 in the example above) was started:

    kubectl -n kaas get pod <dnsmasq-pod-name> -oyaml
    

    Example of system response:

    ...
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: Initialized
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: ContainersReady
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: PodScheduled
      containerStatuses:
      - containerID: containerd://6dbcf2fc4b36ce4c549c9191ab01f72d0236c51d42947675302675e4bfaf4cdf
        image: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq:base-2-28-alpine-20240812132650
        imageID: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq@sha256:3dad3e278add18e69b2608e462691c4823942641a0f0e25e6811e703e3c23b3b
        lastState:
          terminated:
            containerID: containerd://816fcf079cd544acd74e312065de5b5ed4dbf1dc6159fefffff4f644b5e45987
            exitCode: 0
            finishedAt: "2024-10-11T07:38:35Z"
            reason: Completed
            startedAt: "2024-10-10T15:37:45Z"
        name: dhcpd
        ready: true
        restartCount: 2
        started: true
        state:
          running:
            startedAt: "2024-10-11T07:38:37Z"
      ...
    

    In the system response above, the dhcpd container was not ready between "2024-10-11T07:38:35Z" and "2024-10-11T07:38:54Z".

  3. Verify the affected baremetal host. For example:

    kubectl get bmh -n managed-ns test-worker-3 -oyaml
    

    Example of system response:

    ...
    status:
      errorCount: 15
      errorMessage: Introspection timeout
      errorType: inspection error
      ...
      operationHistory:
        deprovision:
          end: null
          start: null
        inspect:
          end: null
          start: "2024-10-11T07:38:19Z"
        provision:
          end: null
          start: null
        register:
          end: "2024-10-11T07:38:19Z"
          start: "2024-10-11T07:37:25Z"
    

    In the system response above, inspection was started at "2024-10-11T07:38:19Z", immediately before the period of the dhcpd container downtime. Therefore, this node is most likely affected by the issue.
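
As a convenience for step 2 above, the Ready condition can also be extracted directly with a jsonpath query instead of reading the full YAML (a sketch):

kubectl -n kaas get pod <dnsmasq-pod-name> \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'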

Workaround

  1. Reboot the node using the IPMI reset or cycle command (see the ipmitool sketch after this procedure).

  2. If the node fails to boot, remove the failed BareMetalHost object and create it again:

    1. Remove BareMetalHost object. For example:

      kubectl delete bmh -n managed-ns test-worker-3
      
    2. Verify that the BareMetalHost object is removed:

      kubectl get bmh -n managed-ns test-worker-3
      
    3. Create a BareMetalHost object from the template. For example:

      kubectl create -f bmhc-test-worker-3.yaml
      kubectl create -f bmh-test-worker-3.yaml
      
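
For step 1, a typical out-of-band power cycle can be issued with ipmitool; the BMC address and credentials below are placeholders:

ipmitool -I lanplus -H <bmc-address> -U <bmc-user> -P <bmc-password> chassis power cycle
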
[46245] Lack of access permissions for HOC and HOCM objects

Fixed in 2.28.0 (17.3.0 and 16.3.0)

When trying to list the HostOSConfigurationModules and HostOSConfiguration custom resources, serviceuser or a user with the global-admin or operator role obtains the access denied error. For example:

kubectl --kubeconfig ~/.kube/mgmt-config get hocm

Error from server (Forbidden): hostosconfigurationmodules.kaas.mirantis.com is forbidden:
User "2d74348b-5669-4c65-af31-6c05dbedac5f" cannot list resource "hostosconfigurationmodules"
in API group "kaas.mirantis.com" at the cluster scope: access denied

Workaround:

  1. Modify the global-admin role by adding a new entry with the following contents to the rules list:

    kubectl edit clusterroles kaas-global-admin
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurationmodules]
      verbs: ['*']
    
  2. For each Container Cloud project, modify the kaas-operator role by adding a new entry with the following contents to the rules list:

    kubectl -n <projectName> edit roles kaas-operator
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurations]
      verbs: ['*']
    
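
Alternatively, both entries can be appended non-interactively with kubectl patch; the following is a sketch equivalent to the two edits above:

kubectl patch clusterrole kaas-global-admin --type=json \
  -p='[{"op":"add","path":"/rules/-","value":{"apiGroups":["kaas.mirantis.com"],"resources":["hostosconfigurationmodules"],"verbs":["*"]}}]'

kubectl -n <projectName> patch role kaas-operator --type=json \
  -p='[{"op":"add","path":"/rules/-","value":{"apiGroups":["kaas.mirantis.com"],"resources":["hostosconfigurations"],"verbs":["*"]}}]'
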
[42386] A load balancer service does not obtain the external IP address

Due to the MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, both services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the service that was changed first is left without an external IP address, while the service that was changed later has the external IP assigned as expected.

To work around the issue, make a dummy change to the service object where external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[41305] DHCP responses are lost between dnsmasq and dhcp-relay pods

Fixed in 2.28.0 (17.3.0 and 16.3.0)

After node maintenance of a management cluster, the newly added nodes may fail to undergo provisioning successfully. The issue relates to new nodes that are in the same L2 domain as the management cluster.

The issue was observed on environments having management cluster nodes configured with a single L2 segment used for all network traffic (PXE and LCM/management networks).

To verify whether the cluster is affected:

Verify whether the dnsmasq and dhcp-relay pods run on the same node in the management cluster:

kubectl -n kaas get pods -o wide | grep -e "dhcp\|dnsmasq"

Example of system response:

dhcp-relay-7d85f75f76-5vdw2   2/2   Running   2 (36h ago)   36h   10.10.0.122     kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (36h ago)   36h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>

If this is the case, proceed to the workaround below.

Workaround:

  1. Log in to a node that contains kubeconfig of the affected management cluster.

  2. Make sure that at least two management cluster nodes are schedulable:

    kubectl get node
    

    Example of a positive system response:

    NAME                                             STATUS   ROLES    AGE   VERSION
    kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-ad5a6f51-b98f-43c3-91d5-55fed3d0ff21   Ready    master   37h   v1.27.10-mirantis-1
    
  3. Delete the dhcp-relay pod:

    kubectl -n kaas delete pod <dhcp-relay-xxxxx>
    
  4. Verify that the dnsmasq and dhcp-relay pods are scheduled into different nodes:

    kubectl -n kaas get pods -o wide | grep -e "dhcp\|dnsmasq"
    

    Example of a positive system response:

    dhcp-relay-7d85f75f76-rkv03   2/2   Running   0             49s   10.10.0.121     kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   <none>   <none>
    dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (37h ago)   37h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.
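
For example, assuming the node name is known; the --ignore-daemonsets flag is typically required because DaemonSet-managed Pods cannot be evicted by drain:

kubectl cordon <nodeName>
kubectl drain <nodeName> --ignore-daemonsets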


LCM
[39437] Failure to replace a master node on a Container Cloud cluster

Fixed in 2.29.0 (17.4.0 and 16.4.0)

During the replacement of a master node on a cluster of any type, the process may get stuck with the Kubelet's NodeReady condition is Unknown message in the machine status on the remaining master nodes.

As a workaround, log in to the affected node and run the following command:

docker restart ucp-kubelet
[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.
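
A minimal sketch of steps 1 and 3-4, assuming the affected Pod is mariadb-server-0 in the kaas namespace (adjust the Pod name, namespace, and backup location to your environment):

# Back up the MariaDB data directory of the affected Pod
kubectl -n kaas exec mariadb-server-0 -- tar czf /tmp/mysql-backup.tar.gz /var/lib/mysql
# Remove the galera.cache file of the affected replica
kubectl -n kaas exec mariadb-server-0 -- rm /var/lib/mysql/galera.cache
# Remove the affected Pod so that Kubernetes recreates it
kubectl -n kaas delete pod mariadb-server-0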

[30294] Replacement of a master node is stuck on the calico-node Pod start

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a master node on a cluster of any type, the calico-node Pod fails to start on a new node that has the same IP address as the node being replaced.

Workaround:

  1. Log in to any master node.

  2. From a CLI with an MKE client bundle, create a shell alias to start calicoctl using the mirantis/ucp-dsinfo image:

    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/key.pem \
    -e ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/ca.pem \
    -e ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v /var/lib/docker/volumes/ucp-kv-certs/_data:/var/lib/docker/volumes/ucp-kv-certs/_data:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl \
    "
    
    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \
    -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \
    -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v ucp-node-certs:/ucp-node-certs:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl --allow-version-mismatch \
    "
    

    In the above command, replace the following values with the corresponding settings of the affected cluster:

    • <etcdEndpoint> is the etcd endpoint defined in the Calico configuration file. For example, ETCD_ENDPOINTS=127.0.0.1:12378

    • <mkeVersion> is the MKE version installed on your cluster. For example, mirantis/ucp-dsinfo:3.5.7.

  3. Verify the node list on the cluster:

    kubectl get node
    
  4. Compare this list with the node list in Calico to identify the old node:

    calicoctl get node -o wide
    
  5. Remove the old node from Calico:

    calicoctl delete node kaas-node-<nodeID>
    
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>

Ceph
[50566] Ceph upgrade is very slow during patch or major cluster update

Due to the upstream Ceph issue 66717, during CVE upgrade of the Ceph daemon image of Ceph Reef 18.2.4, OSDs may start slow and even fail the starting probe with the following describe output in the rook-ceph-osd-X pod:

 Warning  Unhealthy  57s (x16 over 3m27s)  kubelet  Startup probe failed:
 ceph daemon health check failed with the following output:
> no valid command found; 10 closest matches:
> 0
> 1
> 2
> abort
> assert
> bluefs debug_inject_read_zeros
> bluefs files list
> bluefs stats
> bluestore bluefs device info [<alloc_size:int>]
> config diff
> admin_socket: invalid command

Workaround:

Complete the following steps during every patch or major cluster update of the Cluster releases 17.2.x, 17.3.x, and 17.4.x (until Ceph 18.2.5 becomes supported):

  1. Plan extra time in the maintenance window for the patch cluster update.

    Slow starts will still impact the update procedure, but after completing the following step, the recovery process noticeably shortens without affecting the overall cluster state and data responsiveness.

  2. Select one of the following options:

    • Before the cluster update, set the noout flag:

      ceph osd set noout
      

      Once the Ceph OSDs image upgrade is done, unset the flag:

      ceph osd unset noout
      
    • Monitor the Ceph OSDs image upgrade. If the symptoms of slow start appear, set the noout flag as soon as possible. Once the Ceph OSDs image upgrade is done, unset the flag.
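
If the ceph CLI is not available directly, the noout flag can usually be toggled from the Ceph tools Pod; the rook-ceph namespace and the rook-ceph-tools Deployment name below are typical for Rook-based deployments and may differ in your environment:

kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd set noout
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd unset noout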

[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a managed cluster based on bare metal with Ceph enabled fails with a PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the descriptions of the Pods that failed to run contain FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas (a scaling sketch is provided after this procedure).

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.
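
A scaling sketch for steps 3 and 7, assuming the affected workload is the prometheus-server StatefulSet in the stacklight namespace (adjust the kind, name, namespace, and original replica count to your environment):

# Scale down before unmapping the stuck RBD volume
kubectl -n stacklight scale statefulset prometheus-server --replicas=0
# Scale back up after the volumeattachment is deleted
kubectl -n stacklight scale statefulset prometheus-server --replicas=1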

Container Cloud web UI
[50181] Failure to deploy a compact cluster

A compact MOSK cluster fails to be deployed through the Container Cloud web UI because the web UI does not allow adding labels to the control plane machines or changing dedicatedControlPlane: false.

To work around the issue, manually add the required labels using the CLI. Once done, the cluster deployment resumes.

[50168] Inability to use a new project right after creation

A newly created project does not display all available tabs in the Container Cloud web UI and shows various access denied errors during the first five minutes after creation.

To work around the issue, wait five minutes after the project creation and then refresh the browser.

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.27.3. For artifacts of the Cluster releases introduced in 2.27.3, see patch Cluster releases 16.2.3 and 17.2.3.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20240716085444

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20240716085444

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.40.21.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.40.21.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.40.21.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.40.21.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.40.21.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.40.21.tgz

Docker images

ambasador Updated

mirantis.azurecr.io/core/external/nginx:1.40.21

baremetal-dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-27-alpine-20240806125028

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-2-27-alpine-20240812133205

bm-collective Updated

mirantis.azurecr.io/bm/bm-collective:base-2-27-alpine-20240812135414

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.40.21

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240716113922

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:antelope-jammy-20240716113922

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240117102150

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-2-27-alpine-20240812140336

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-ba8ada4-20240405150338

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240523075821

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.25.0-42-g8710cbe

metallb-controller Updated

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-dfbd1a68-amd64

metallb-speaker Updated

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-dfbd1a68-amd64

syslog-ng Updated

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20240806124545

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.40.21.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.40.21.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.40.21.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.40.21.tgz

byo-provider Unsupported

https://binary.mirantis.com/core/helm/byo-provider-1.40.21.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.40.21.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.40.21.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.40.21.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.40.21.tgz

host-os-modules-controller

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.40.21.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.40.21.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.40.21.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.40.21.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.40.21.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.40.21.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.40.21.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.40.21.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.40.21.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.40.21.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.40.21.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.40.21.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.40.21.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.40.21.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.40.21.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.40.21.tgz

secret-controller

https://binary.mirantis.com/core/helm/secret-controller-1.40.21.tgz

squid-proxy Unsupported

https://binary.mirantis.com/core/helm/squid-proxy-1.40.21.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.40.21.tgz

vsphere-credentials-controller Unsupported

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.40.21.tgz

vsphere-provider Unsupported

https://binary.mirantis.com/core/helm/vsphere-provider-1.40.21.tgz

vsphere-vm-template-controller Unsupported

https://binary.mirantis.com/core/helm/vsphere-vm-template-controller-1.40.21.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.40.21

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.40.21

byo-cluster-api-controller Unsupported

mirantis.azurecr.io/core/byo-cluster-api-controller:1.40.21

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.40.21

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-6

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.40.21

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.40.21

frontend Updated

mirantis.azurecr.io/core/frontend:1.40.21

host-os-modules-controller Updated

mirantis.azurecr.io/core/host-os-modules-controller:1.40.21

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.40.21

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.40.21

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.40.21

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.40.21

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.40.21

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.40.21

mcc-cache-warmup Updated

mirantis.azurecr.io/core/mcc-cache-warmup:1.40.21

mcc-haproxy Updated

mirantis.azurecr.io/lcm/mcc-haproxy:v0.25.0-42-g8710cbe

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.25.0-42-g8710cbe

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.40.21

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.40.21

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.40.21

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.40.21

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.40.21

registry Updated

mirantis.azurecr.io/lcm/registry:v2.8.1-11

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.40.21

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.40.21

secret-controller Updated

mirantis.azurecr.io/core/secret-controller:1.40.21

squid-proxy Unsupported

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.40.21

vsphere-cluster-api-controller Unsupported

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.40.21

vsphere-credentials-controller Unsupported

mirantis.azurecr.io/core/vsphere-credentials-controller:1.40.21

vsphere-vm-template-controller Unsupported

mirantis.azurecr.io/core/vsphere-vm-template-controller:1.40.21

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/core/helm/iam-1.40.21.tgz

Docker images

kubectl

mirantis.azurecr.io/general/kubectl:20240711152257

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240523075821

mcc-keycloak Updated

mirantis.azurecr.io/iam/mcc-keycloak:24.0.5-20240802071408

2.27.2

Important

For MOSK clusters, Container Cloud 2.27.2 is the continuation for MOSK 24.1.x series using the patch Cluster release 17.1.7. For the update path of 24.1 and 24.2 series, see MOSK documentation: Cluster update scheme.

The management cluster of a MOSK 24.1, 24.1.5, or 24.1.6 cluster is automatically updated to the latest patch Cluster release 16.2.2.

The Container Cloud patch release 2.27.2, which is based on the 2.27.0 major release, provides the following updates:

  • Support for the patch Cluster release 16.2.2.

  • Support for the patch Cluster releases 16.1.7 and 17.1.7, the latter representing Mirantis OpenStack for Kubernetes (MOSK) patch release 24.1.7.

  • Support for MKE 3.7.11.

  • Bare metal: update of Ubuntu mirror from ubuntu-2024-06-27-095142 to ubuntu-2024-07-16-014744 along with update of minor kernel version from 5.15.0-113-generic to 5.15.0-116-generic (Cluster release 16.2.2).

  • Security fixes for CVEs in images.

This patch release also supports the latest major Cluster releases 17.2.0 and 16.2.0. It does not support greenfield deployments based on deprecated Cluster releases; use the latest available Cluster release instead.

For main deliverables of the parent Container Cloud release of 2.27.2, refer to 2.27.0.

Security notes

In total, since Container Cloud 2.27.1, 95 Common Vulnerabilities and Exposures (CVE) have been fixed in 2.27.2: 6 of critical and 89 of high severity.

The table below includes the total numbers of addressed unique and common CVEs in images by product component since Container Cloud 2.27.1. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component

CVE type

Critical

High

Total

Kaas core

Unique

5

26

31

Common

6

69

75

StackLight

Unique

0

3

3

Common

0

20

20

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 24.1.7: Security notes.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.27.2 including the Cluster releases 16.2.2, 16.1.7, and 17.1.7.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[47202] Inspection error on bare metal hosts after dnsmasq restart

Note

Moving forward, the workaround for this issue will be moved from Release Notes to MOSK Troubleshooting Guide: Inspection error on bare metal hosts after dnsmasq restart.

If the dnsmasq pod is restarted during the bootstrap of newly added nodes, those nodes may fail to undergo inspection. This can result in the inspection error status in the corresponding BareMetalHost objects.

The issue can occur when:

  • The dnsmasq pod was moved to another node.

  • DHCP subnets were changed, including addition or removal. In this case, the dhcpd container of the dnsmasq pod is restarted.

    Caution

    If changing or adding DHCP subnets is required to bootstrap new nodes, apply the change first, wait until the dnsmasq pod becomes ready, and only then create the BareMetalHost objects.

To verify whether the nodes are affected:

  1. Verify whether the BareMetalHost objects contain the inspection error:

    kubectl get bmh -n <managed-cluster-namespace-name>
    

    Example of system response:

    NAME            STATE         CONSUMER        ONLINE   ERROR              AGE
    test-master-1   provisioned   test-master-1   true                        9d
    test-master-2   provisioned   test-master-2   true                        9d
    test-master-3   provisioned   test-master-3   true                        9d
    test-worker-1   provisioned   test-worker-1   true                        9d
    test-worker-2   provisioned   test-worker-2   true                        9d
    test-worker-3   inspecting                    true     inspection error   19h
    
  2. Verify whether the dnsmasq pod was in Ready state when the inspection of the affected baremetal hosts (test-worker-3 in the example above) was started:

    kubectl -n kaas get pod <dnsmasq-pod-name> -oyaml
    

    Example of system response:

    ...
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: Initialized
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: ContainersReady
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: PodScheduled
      containerStatuses:
      - containerID: containerd://6dbcf2fc4b36ce4c549c9191ab01f72d0236c51d42947675302675e4bfaf4cdf
        image: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq:base-2-28-alpine-20240812132650
        imageID: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq@sha256:3dad3e278add18e69b2608e462691c4823942641a0f0e25e6811e703e3c23b3b
        lastState:
          terminated:
            containerID: containerd://816fcf079cd544acd74e312065de5b5ed4dbf1dc6159fefffff4f644b5e45987
            exitCode: 0
            finishedAt: "2024-10-11T07:38:35Z"
            reason: Completed
            startedAt: "2024-10-10T15:37:45Z"
        name: dhcpd
        ready: true
        restartCount: 2
        started: true
        state:
          running:
            startedAt: "2024-10-11T07:38:37Z"
      ...
    

    In the system response above, the dhcpd container was not ready between "2024-10-11T07:38:35Z" and "2024-10-11T07:38:54Z".

  3. Verify the affected baremetal host. For example:

    kubectl get bmh -n managed-ns test-worker-3 -oyaml
    

    Example of system response:

    ...
    status:
      errorCount: 15
      errorMessage: Introspection timeout
      errorType: inspection error
      ...
      operationHistory:
        deprovision:
          end: null
          start: null
        inspect:
          end: null
          start: "2024-10-11T07:38:19Z"
        provision:
          end: null
          start: null
        register:
          end: "2024-10-11T07:38:19Z"
          start: "2024-10-11T07:37:25Z"
    

    In the system response above, inspection was started at "2024-10-11T07:38:19Z", immediately before the period of the dhcpd container downtime. Therefore, this node is most likely affected by the issue.

Workaround

  1. Reboot the node using the IPMI reset or cycle command.

  2. If the node fails to boot, remove the failed BareMetalHost object and create it again:

    1. Remove BareMetalHost object. For example:

      kubectl delete bmh -n managed-ns test-worker-3
      
    2. Verify that the BareMetalHost object is removed:

      kubectl get bmh -n managed-ns test-worker-3
      
    3. Create a BareMetalHost object from the template. For example:

      kubectl create -f bmhc-test-worker-3.yaml
      kubectl create -f bmh-test-worker-3.yaml
      
[46245] Lack of access permissions for HOC and HOCM objects

Fixed in 2.28.0 (17.3.0 and 16.3.0)

When trying to list the HostOSConfigurationModules and HostOSConfiguration custom resources, serviceuser or a user with the global-admin or operator role obtains the access denied error. For example:

kubectl --kubeconfig ~/.kube/mgmt-config get hocm

Error from server (Forbidden): hostosconfigurationmodules.kaas.mirantis.com is forbidden:
User "2d74348b-5669-4c65-af31-6c05dbedac5f" cannot list resource "hostosconfigurationmodules"
in API group "kaas.mirantis.com" at the cluster scope: access denied

Workaround:

  1. Modify the global-admin role by adding a new entry with the following contents to the rules list:

    kubectl edit clusterroles kaas-global-admin
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurationmodules]
      verbs: ['*']
    
  2. For each Container Cloud project, modify the kaas-operator role by adding a new entry with the following contents to the rules list:

    kubectl -n <projectName> edit roles kaas-operator
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurations]
      verbs: ['*']
    
[42386] A load balancer service does not obtain the external IP address

Due to the MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, both services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the service that was changed first is left without an external IP address, while the service that was changed later has the external IP assigned as expected.

To work around the issue, make a dummy change to the service object where external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[41305] DHCP responses are lost between dnsmasq and dhcp-relay pods

Fixed in 2.28.0 (17.3.0 and 16.3.0)

After node maintenance of a management cluster, the newly added nodes may fail to undergo provisioning successfully. The issue relates to new nodes that are in the same L2 domain as the management cluster.

The issue was observed on environments having management cluster nodes configured with a single L2 segment used for all network traffic (PXE and LCM/management networks).

To verify whether the cluster is affected:

Verify whether the dnsmasq and dhcp-relay pods run on the same node in the management cluster:

kubectl -n kaas get pods -o wide | grep -e "dhcp\|dnsmasq"

Example of system response:

dhcp-relay-7d85f75f76-5vdw2   2/2   Running   2 (36h ago)   36h   10.10.0.122     kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (36h ago)   36h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>

If this is the case, proceed to the workaround below.

Workaround:

  1. Log in to a node that contains kubeconfig of the affected management cluster.

  2. Make sure that at least two management cluster nodes are schedulable:

    kubectl get node
    

    Example of a positive system response:

    NAME                                             STATUS   ROLES    AGE   VERSION
    kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-ad5a6f51-b98f-43c3-91d5-55fed3d0ff21   Ready    master   37h   v1.27.10-mirantis-1
    
  3. Delete the dhcp-relay pod:

    kubectl -n kaas delete pod <dhcp-relay-xxxxx>
    
  4. Verify that the dnsmasq and dhcp-relay pods are scheduled into different nodes:

    kubectl -n kaas get pods -o wide | grep -e "dhcp\|dnsmasq"
    

    Example of a positive system response:

    dhcp-relay-7d85f75f76-rkv03   2/2   Running   0             49s   10.10.0.121     kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   <none>   <none>
    dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (37h ago)   37h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.


LCM
[39437] Failure to replace a master node on a Container Cloud cluster

Fixed in 2.29.0 (17.4.0 and 16.4.0)

During the replacement of a master node on a cluster of any type, the process may get stuck with the Kubelet's NodeReady condition is Unknown message in the machine status on the remaining master nodes.

As a workaround, log in to the affected node and run the following command:

docker restart ucp-kubelet
[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.

[30294] Replacement of a master node is stuck on the calico-node Pod start

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a master node on a cluster of any type, the calico-node Pod fails to start on a new node that has the same IP address as the node being replaced.

Workaround:

  1. Log in to any master node.

  2. From a CLI with an MKE client bundle, create a shell alias to start calicoctl using the mirantis/ucp-dsinfo image:

    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/key.pem \
    -e ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/ca.pem \
    -e ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v /var/lib/docker/volumes/ucp-kv-certs/_data:/var/lib/docker/volumes/ucp-kv-certs/_data:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl \
    "
    
    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \
    -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \
    -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v ucp-node-certs:/ucp-node-certs:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl --allow-version-mismatch \
    "
    

    In the above command, replace the following values with the corresponding settings of the affected cluster:

    • <etcdEndpoint> is the etcd endpoint defined in the Calico configuration file. For example, ETCD_ENDPOINTS=127.0.0.1:12378

    • <mkeVersion> is the MKE version installed on your cluster. For example, mirantis/ucp-dsinfo:3.5.7.

  3. Verify the node list on the cluster:

    kubectl get node
    
  4. Compare this list with the node list in Calico to identify the old node:

    calicoctl get node -o wide
    
  5. Remove the old node from Calico:

    calicoctl delete node kaas-node-<nodeID>
    
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>

Ceph
[50566] Ceph upgrade is very slow during patch or major cluster update

Due to the upstream Ceph issue 66717, during CVE upgrade of the Ceph daemon image of Ceph Reef 18.2.4, OSDs may start slow and even fail the starting probe with the following describe output in the rook-ceph-osd-X pod:

 Warning  Unhealthy  57s (x16 over 3m27s)  kubelet  Startup probe failed:
 ceph daemon health check failed with the following output:
> no valid command found; 10 closest matches:
> 0
> 1
> 2
> abort
> assert
> bluefs debug_inject_read_zeros
> bluefs files list
> bluefs stats
> bluestore bluefs device info [<alloc_size:int>]
> config diff
> admin_socket: invalid command

Workaround:

Complete the following steps during every patch or major cluster update of the Cluster releases 17.2.x, 17.3.x, and 17.4.x (until Ceph 18.2.5 becomes supported):

  1. Plan extra time in the maintenance window for the patch cluster update.

    Slow starts will still impact the update procedure, but after completing the following step, the recovery process noticeably shortens without affecting the overall cluster state and data responsiveness.

  2. Select one of the following options:

    • Before the cluster update, set the noout flag:

      ceph osd set noout
      

      Once the Ceph OSDs image upgrade is done, unset the flag:

      ceph osd unset noout
      
    • Monitor the Ceph OSDs image upgrade. If the symptoms of slow start appear, set the noout flag as soon as possible. Once the Ceph OSDs image upgrade is done, unset the flag.

[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a managed cluster based on bare metal with Ceph enabled fails with a PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the descriptions of the Pods that failed to run contain FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.

Container Cloud web UI
[50181] Failure to deploy a compact cluster

A compact MOSK cluster fails to be deployed through the Container Cloud web UI because the web UI does not allow adding labels to the control plane machines or changing dedicatedControlPlane: false.

To work around the issue, manually add the required labels using the CLI. Once done, the cluster deployment resumes.

[50168] Inability to use a new project right after creation

A newly created project does not display all available tabs in the Container Cloud web UI and shows various access denied errors during the first five minutes after creation.

To work around the issue, wait five minutes after the project creation and then refresh the browser.

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.27.2. For artifacts of the Cluster releases introduced in 2.27.2, see patch Cluster releases 16.2.2, 16.1.7, and 17.1.7.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries Updated

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20240716085444

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20240716085444

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.40.18.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.40.18.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.40.18.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.40.18.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.40.18.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.40.18.tgz

Docker images

ambasador Updated

mirantis.azurecr.io/core/external/nginx:1.40.18

baremetal-dnsmasq

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-27-alpine-20240701130209

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-2-27-alpine-20240711081559

bm-collective

mirantis.azurecr.io/bm/bm-collective:base-2-27-alpine-20240701130719

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.40.18

ironic Updated

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240716113922

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:antelope-jammy-20240716113922

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240117102150

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-2-27-alpine-20240701133222

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-ba8ada4-20240405150338

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240523075821

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.25.0-40-g890ffca

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-e86184d9-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-e86184d9-amd64

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20240701125905

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.40.18.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.40.18.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.40.18.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.40.18.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.40.18.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.40.18.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.40.18.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.40.18.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.40.18.tgz

host-os-modules-controller

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.40.18.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.40.18.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.40.18.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.40.18.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.40.18.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.40.18.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.40.18.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.40.18.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.40.18.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.40.18.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.40.18.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.40.18.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.40.18.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.40.18.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.40.18.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.40.18.tgz

secret-controller

https://binary.mirantis.com/core/helm/secret-controller-1.40.18.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.40.18.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.40.18.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.40.18.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.40.18.tgz

vsphere-vm-template-controller

https://binary.mirantis.com/core/helm/vsphere-vm-template-controller-1.40.18.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.40.18

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.40.18

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.40.18

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.40.18

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-6

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.40.18

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.40.18

frontend Updated

mirantis.azurecr.io/core/frontend:1.40.18

host-os-modules-controller Updated

mirantis.azurecr.io/core/host-os-modules-controller:1.40.18

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.40.18

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.40.18

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.40.18

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.40.18

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.40.18

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.40.18

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.25.0-40-g890ffca

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.25.0-40-g890ffca

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.40.18

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.40.18

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.40.18

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.40.18

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.40.18

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-10

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.40.18

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.40.18

secret-controller Updated

mirantis.azurecr.io/core/secret-controller:1.40.18

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.40.18

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.40.18

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.40.18

vsphere-vm-template-controller Updated

mirantis.azurecr.io/core/vsphere-vm-template-controller:1.40.18

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/core/helm/iam-1.40.18.tgz

Docker images

kubectl Updated

mirantis.azurecr.io/general/kubectl:20240711152257

kubernetes-entrypoint Removed

n/a

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240523075821

mcc-keycloak

mirantis.azurecr.io/iam/mcc-keycloak:24.0.5-20240621131831

2.27.1

Important

For MOSK clusters, Container Cloud 2.27.1 is the continuation for MOSK 24.1.x series using the patch Cluster release 17.1.6. For the update path of 24.1 and 24.2 series, see MOSK documentation: Cluster update scheme.

The management cluster of a MOSK 24.1 or 24.1.5 cluster is automatically updated to the latest patch Cluster release 16.2.1.

The Container Cloud patch release 2.27.1, which is based on the 2.27.0 major release, provides the following updates:

  • Support for the patch Cluster release 16.2.1.

  • Support for the patch Cluster releases 16.1.6 and 17.1.6, the latter representing Mirantis OpenStack for Kubernetes (MOSK) patch release 24.1.6.

  • Support for MKE 3.7.10.

  • Support for docker-ee-cli 23.0.13 in MCR 23.0.11 to fix several CVEs.

  • Bare metal: update of Ubuntu mirror from ubuntu-2024-05-17-013445 to ubuntu-2024-06-27-095142 along with update of minor kernel version from 5.15.0-107-generic to 5.15.0-113-generic.

  • Security fixes for CVEs in images.

  • Bug fixes.

This patch release also supports the latest major Cluster releases 17.2.0 and 16.2.0. It does not support greenfield deployments based on deprecated Cluster releases; use the latest available Cluster release instead.

For main deliverables of the parent Container Cloud release of 2.27.1, refer to 2.27.0.

Security notes

In total, since Container Cloud 2.27.0, 270 Common Vulnerabilities and Exposures (CVE) of high severity have been fixed in 2.27.1.

The table below includes the total numbers of addressed unique and common CVEs in images by product component since Container Cloud 2.27.0. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component

CVE type

Critical

High

Total

Ceph

Unique

0

6

6

Common

0

29

29

Kaas core

Unique

0

10

10

Common

0

178

178

StackLight

Unique

0

14

14

Common

0

63

63

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 24.1.6: Security notes.

Addressed issues

The following issues have been addressed in the Container Cloud patch release 2.27.1 along with the patch Cluster releases 16.2.1, 16.1.6, and 17.1.6.

  • [42304] [StackLight] [Cluster releases 17.1.6, 16.1.6] Fixed the issue with failure of shard relocation in the OpenSearch cluster on large Container Cloud managed clusters.

  • [40020] [StackLight] [Cluster releases 17.1.6, 16.1.6] Fixed the issue with rollover_policy not being applied to the current indices while updating the policy for the current system* and audit* data streams.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.27.1 including the Cluster releases 16.2.1, 16.1.6, and 17.1.6.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[47202] Inspection error on bare metal hosts after dnsmasq restart

Note

Moving forward, the workaround for this issue will be moved from Release Notes to MOSK Troubleshooting Guide: Inspection error on bare metal hosts after dnsmasq restart.

If the dnsmasq pod is restarted during the bootstrap of newly added nodes, those nodes may fail to undergo inspection. This can result in an inspection error in the corresponding BareMetalHost objects.

The issue can occur when:

  • The dnsmasq pod was moved to another node.

  • DHCP subnets were changed, including addition or removal. In this case, the dhcpd container of the dnsmasq pod is restarted.

    Caution

    If you need to change or add DHCP subnets to bootstrap new nodes, first apply the changes, then wait until the dnsmasq pod becomes ready, and only then create the BareMetalHost objects.
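
    For example, before creating the BareMetalHost objects, you can check readiness of the pod as follows (a minimal sketch that reuses the pod listing shown in the verification steps below):

    kubectl -n kaas get pods | grep dnsmasq

    The dnsmasq pod is ready when all of its containers are reported as ready, for example, 5/5.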

To verify whether the nodes are affected:

  1. Verify whether the BareMetalHost objects contain the inspection error:

    kubectl get bmh -n <managed-cluster-namespace-name>
    

    Example of system response:

    NAME            STATE         CONSUMER        ONLINE   ERROR              AGE
    test-master-1   provisioned   test-master-1   true                        9d
    test-master-2   provisioned   test-master-2   true                        9d
    test-master-3   provisioned   test-master-3   true                        9d
    test-worker-1   provisioned   test-worker-1   true                        9d
    test-worker-2   provisioned   test-worker-2   true                        9d
    test-worker-3   inspecting                    true     inspection error   19h
    
  2. Verify whether the dnsmasq pod was in the Ready state when the inspection of the affected bare metal hosts (test-worker-3 in the example above) started:

    kubectl -n kaas get pod <dnsmasq-pod-name> -oyaml
    

    Example of system response:

    ...
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: Initialized
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: ContainersReady
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: PodScheduled
      containerStatuses:
      - containerID: containerd://6dbcf2fc4b36ce4c549c9191ab01f72d0236c51d42947675302675e4bfaf4cdf
        image: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq:base-2-28-alpine-20240812132650
        imageID: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq@sha256:3dad3e278add18e69b2608e462691c4823942641a0f0e25e6811e703e3c23b3b
        lastState:
          terminated:
            containerID: containerd://816fcf079cd544acd74e312065de5b5ed4dbf1dc6159fefffff4f644b5e45987
            exitCode: 0
            finishedAt: "2024-10-11T07:38:35Z"
            reason: Completed
            startedAt: "2024-10-10T15:37:45Z"
        name: dhcpd
        ready: true
        restartCount: 2
        started: true
        state:
          running:
            startedAt: "2024-10-11T07:38:37Z"
      ...
    

    In the system response above, the dhcpd container was not ready between "2024-10-11T07:38:35Z" and "2024-10-11T07:38:54Z".

  3. Verify the affected bare metal host. For example:

    kubectl get bmh -n managed-ns test-worker-3 -oyaml
    

    Example of system response:

    ...
    status:
      errorCount: 15
      errorMessage: Introspection timeout
      errorType: inspection error
      ...
      operationHistory:
        deprovision:
          end: null
          start: null
        inspect:
          end: null
          start: "2024-10-11T07:38:19Z"
        provision:
          end: null
          start: null
        register:
          end: "2024-10-11T07:38:19Z"
          start: "2024-10-11T07:37:25Z"
    

    In the system response above, inspection was started at "2024-10-11T07:38:19Z", immediately before the period of the dhcpd container downtime. Therefore, this node is most likely affected by the issue.

Workaround

  1. Reboot the node using the IPMI reset or cycle command.

  2. If the node fails to boot, remove the failed BareMetalHost object and create it again:

    1. Remove the BareMetalHost object. For example:

      kubectl delete bmh -n managed-ns test-worker-3
      
    2. Verify that the BareMetalHost object is removed:

      kubectl get bmh -n managed-ns test-worker-3
      
    3. Create a BareMetalHost object from the template. For example:

      kubectl create -f bmhc-test-worker-3.yaml
      kubectl create -f bmh-test-worker-3.yaml
      
[46245] Lack of access permissions for HOC and HOCM objects

Fixed in 2.28.0 (17.3.0 and 16.3.0)

When trying to list the HostOSConfigurationModules and HostOSConfiguration custom resources, the serviceuser or a user with the global-admin or operator role obtains an access denied error. For example:

kubectl --kubeconfig ~/.kube/mgmt-config get hocm

Error from server (Forbidden): hostosconfigurationmodules.kaas.mirantis.com is forbidden:
User "2d74348b-5669-4c65-af31-6c05dbedac5f" cannot list resource "hostosconfigurationmodules"
in API group "kaas.mirantis.com" at the cluster scope: access denied

Workaround:

  1. Modify the global-admin role by adding a new entry with the following contents to the rules list:

    kubectl edit clusterroles kaas-global-admin
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurationmodules]
      verbs: ['*']
    
  2. For each Container Cloud project, modify the kaas-operator role by adding a new entry with the following contents to the rules list:

    kubectl -n <projectName> edit roles kaas-operator
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurations]
      verbs: ['*']
    
[42386] A load balancer service does not obtain the external IP address

Due to the MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first service that was changed remains with no external IP address assigned. However, the second service, which was changed later, has the external IP address assigned as expected.

To work around the issue, make a dummy change to the service object where external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[41305] DHCP responses are lost between dnsmasq and dhcp-relay pods

Fixed in 2.28.0 (17.3.0 and 16.3.0)

After node maintenance of a management cluster, the newly added nodes may fail to undergo provisioning successfully. The issue relates to new nodes that are in the same L2 domain as the management cluster.

The issue was observed in environments where the management cluster nodes are configured with a single L2 segment used for all network traffic (PXE and LCM/management networks).

To verify whether the cluster is affected:

Verify whether the dnsmasq and dhcp-relay pods run on the same node in the management cluster:

kubectl -n kaas get pods -o wide | grep -e "dhcp\|dnsmasq"

Example of system response:

dhcp-relay-7d85f75f76-5vdw2   2/2   Running   2 (36h ago)   36h   10.10.0.122     kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (36h ago)   36h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>

If this is the case, proceed to the workaround below.

Workaround:

  1. Log in to a node that contains kubeconfig of the affected management cluster.

  2. Make sure that at least two management cluster nodes are schedulable:

    kubectl get node
    

    Example of a positive system response:

    NAME                                             STATUS   ROLES    AGE   VERSION
    kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-ad5a6f51-b98f-43c3-91d5-55fed3d0ff21   Ready    master   37h   v1.27.10-mirantis-1
    
  3. Delete the dhcp-relay pod:

    kubectl -n kaas delete pod <dhcp-relay-xxxxx>
    
  4. Verify that the dnsmasq and dhcp-relay pods are scheduled into different nodes:

    kubectl -n kaas get pods -o wide | grep -e "dhcp\|dnsmasq"
    

    Example of a positive system response:

    dhcp-relay-7d85f75f76-rkv03   2/2   Running   0             49s   10.10.0.121     kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   <none>   <none>
    dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (37h ago)   37h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.


LCM
[39437] Failure to replace a master node on a Container Cloud cluster

Fixed in 2.29.0 (17.4.0 and 16.4.0)

During the replacement of a master node on a cluster of any type, the process may get stuck with the Kubelet's NodeReady condition is Unknown message in the machine status on the remaining master nodes.

As a workaround, log in on the affected node and run the following command:

docker restart ucp-kubelet

[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.
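
For illustration, the steps above map to the following commands, assuming the affected Pod is mariadb-server-0 in the kaas namespace; adjust the namespace, Pod name, and backup location to your environment:

# Step 1: back up the /var/lib/mysql directory of the affected Pod
kubectl -n kaas exec mariadb-server-0 -- tar -czf /tmp/mysql-backup.tar.gz /var/lib/mysql
# Step 2: verify that the other replicas are up and ready
kubectl -n kaas get pods | grep mariadb-server
# Step 3: remove the galera.cache file for the affected Pod
kubectl -n kaas exec mariadb-server-0 -- rm -f /var/lib/mysql/galera.cache
# Step 4: remove the affected Pod so that Kubernetes recreates it
kubectl -n kaas delete pod mariadb-server-0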

[30294] Replacement of a master node is stuck on the calico-node Pod start

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a master node on a cluster of any type, the calico-node Pod fails to start on a new node that has the same IP address as the node being replaced.

Workaround:

  1. Log in to any master node.

  2. From a CLI with an MKE client bundle, create a shell alias to start calicoctl using the mirantis/ucp-dsinfo image:

    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/key.pem \
    -e ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/ca.pem \
    -e ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v /var/lib/docker/volumes/ucp-kv-certs/_data:/var/lib/docker/volumes/ucp-kv-certs/_data:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl \
    "
    
    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \
    -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \
    -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v ucp-node-certs:/ucp-node-certs:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl --allow-version-mismatch \
    "
    

    In the above command, replace the following values with the corresponding settings of the affected cluster:

    • <etcdEndpoint> is the etcd endpoint defined in the Calico configuration file. For example, ETCD_ENDPOINTS=127.0.0.1:12378

    • <mkeVersion> is the MKE version installed on your cluster. For example, mirantis/ucp-dsinfo:3.5.7.

  3. Verify the node list on the cluster:

    kubectl get node
    
  4. Compare this list with the node list in Calico to identify the old node:

    calicoctl get node -o wide
    
  5. Remove the old node from Calico:

    calicoctl delete node kaas-node-<nodeID>
    
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker Swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>

Ceph
[50566] Ceph upgrade is very slow during patch or major cluster update

Due to the upstream Ceph issue 66717, during a CVE-related upgrade of the Ceph daemon image of Ceph Reef 18.2.4, OSDs may start slowly and even fail the startup probe, with the following describe output in the rook-ceph-osd-X pod:

 Warning  Unhealthy  57s (x16 over 3m27s)  kubelet  Startup probe failed:
 ceph daemon health check failed with the following output:
> no valid command found; 10 closest matches:
> 0
> 1
> 2
> abort
> assert
> bluefs debug_inject_read_zeros
> bluefs files list
> bluefs stats
> bluestore bluefs device info [<alloc_size:int>]
> config diff
> admin_socket: invalid command

Workaround:

Complete the following steps during every patch or major cluster update of the Cluster releases 17.2.x, 17.3.x, and 17.4.x (until Ceph 18.2.5 becomes supported):

  1. Plan extra time in the maintenance window for the patch cluster update.

    Slow starts will still impact the update procedure, but after completing the following step, the recovery process noticeably shortens without affecting the overall cluster state and data responsiveness.

  2. Select one of the following options:

    • Before the cluster update, set the noout flag:

      ceph osd set noout
      

      Once the Ceph OSDs image upgrade is done, unset the flag:

      ceph osd unset noout
      
    • Monitor the Ceph OSDs image upgrade. If the symptoms of slow start appear, set the noout flag as soon as possible. Once the Ceph OSDs image upgrade is done, unset the flag.
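
If you run Ceph CLI commands from inside the cluster, one possible way (an assumption; use your usual method of accessing the Ceph CLI if it differs) is to execute them through the Rook Ceph tools Pod:

kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd set noout
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd unset noout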

[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a managed cluster based on bare metal with Ceph enabled fails with a PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the description of the Pods that failed to run contains the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.

Container Cloud web UI
[50181] Failure to deploy a compact cluster

A compact MOSK cluster fails to be deployed through the Container Cloud web UI because the web UI does not allow adding labels to the control plane machines or changing the dedicatedControlPlane: false value.

To work around the issue, manually add the required labels using the CLI. Once done, the cluster deployment resumes.
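
For example, the control plane Machine objects can be edited directly (a generic sketch; the required label keys and the exact specification fields depend on your MOSK configuration, so verify them against the Machine resource reference for your release):

kubectl -n <projectName> edit machine <controlPlaneMachineName>

In the opened object, add the required labels, typically under the spec:providerSpec:value:nodeLabels section, and save the changes.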

[50168] Inability to use a new project right after creation

A newly created project does not display all available tabs in the Container Cloud web UI and shows various access denied errors during the first five minutes after creation.

To work around the issue, wait five minutes after the project creation and refresh the browser.

Update notes

This section describes the specific actions you as a cloud operator need to complete before or after your Container Cloud cluster update to the Cluster releases 17.1.6, 16.2.1, or 16.1.6.

Consider this information as a supplement to the generic update procedures published in Operations Guide: Automatic upgrade of a management cluster and Update a patch Cluster release of a managed cluster.

Post-update actions
Prepare for changing label values in Ceph metrics used in customizations

Note

If you do not use Ceph metrics in any customizations, for example, custom alerts, Grafana dashboards, or queries in custom workloads, skip this section.

In Container Cloud 2.27.0, the performance metric exporter integrated into the Ceph Manager daemon was deprecated in favor of the dedicated Ceph Exporter daemon. If you use Ceph metrics in any customizations, such as custom alerts, Grafana dashboards, or queries in custom tools, prepare to update the values of several labels in these metrics. The labels will change in Container Cloud 2.28.0 (Cluster releases 16.3.0 and 17.3.0).

Note

Names of metrics will not change, and no metrics will be removed.

All Ceph metrics collected by the Ceph Exporter daemon will change their job and instance labels because the metrics will be scraped from the new Ceph Exporter daemon instead of the performance metric exporter of Ceph Manager:

  • Values of the job labels will be changed from rook-ceph-mgr to prometheus-rook-exporter for all Ceph metrics moved to Ceph Exporter. The full list of moved metrics is presented below.

  • Values of the instance labels will be changed from the metric endpoint of Ceph Manager with port 9283 to the metric endpoint of Ceph Exporter with port 9926 for all Ceph metrics moved to Ceph Exporter. The full list of moved metrics is presented below.

  • Values of the instance_id labels of Ceph metrics from the RADOS Gateway (RGW) daemons will be changed from the daemon GID to the daemon subname. For example, instead of instance_id="<RGW_PROCESS_GID>", the instance_id="a" (ceph_rgw_qlen{instance_id="a"}) will be used. The list of moved Ceph RGW metrics is presented below.
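
For example, a custom alert or Grafana panel query that selects a moved Ceph metric by the job label changes as follows (an illustrative selector only):

Before the change (Container Cloud 2.27.x):

ceph_osd_op{job="rook-ceph-mgr"}

After the change (Container Cloud 2.28.0 and later):

ceph_osd_op{job="prometheus-rook-exporter"}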

List of affected Ceph RGW metrics
  • ceph_rgw_cache_.*

  • ceph_rgw_failed_req

  • ceph_rgw_gc_retire_object

  • ceph_rgw_get.*

  • ceph_rgw_keystone_.*

  • ceph_rgw_lc_.*

  • ceph_rgw_lua_.*

  • ceph_rgw_pubsub_.*

  • ceph_rgw_put.*

  • ceph_rgw_qactive

  • ceph_rgw_qlen

  • ceph_rgw_req

List of all metrics to be collected by Ceph Exporter instead of Ceph Manager
  • ceph_bluefs_.*

  • ceph_bluestore_.*

  • ceph_mds_cache_.*

  • ceph_mds_caps

  • ceph_mds_ceph_.*

  • ceph_mds_dir_.*

  • ceph_mds_exported_inodes

  • ceph_mds_forward

  • ceph_mds_handle_.*

  • ceph_mds_imported_inodes

  • ceph_mds_inodes.*

  • ceph_mds_load_cent

  • ceph_mds_log_.*

  • ceph_mds_mem_.*

  • ceph_mds_openino_dir_fetch

  • ceph_mds_process_request_cap_release

  • ceph_mds_reply_.*

  • ceph_mds_request

  • ceph_mds_root_.*

  • ceph_mds_server_.*

  • ceph_mds_sessions_.*

  • ceph_mds_slow_reply

  • ceph_mds_subtrees

  • ceph_mon_election_.*

  • ceph_mon_num_.*

  • ceph_mon_session_.*

  • ceph_objecter_.*

  • ceph_osd_numpg.*

  • ceph_osd_op.*

  • ceph_osd_recovery_.*

  • ceph_osd_stat_.*

  • ceph_paxos.*

  • ceph_prioritycache.*

  • ceph_purge.*

  • ceph_rgw_cache_.*

  • ceph_rgw_failed_req

  • ceph_rgw_gc_retire_object

  • ceph_rgw_get.*

  • ceph_rgw_keystone_.*

  • ceph_rgw_lc_.*

  • ceph_rgw_lua_.*

  • ceph_rgw_pubsub_.*

  • ceph_rgw_put.*

  • ceph_rgw_qactive

  • ceph_rgw_qlen

  • ceph_rgw_req

  • ceph_rocksdb_.*

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.27.1. For artifacts of the Cluster releases introduced in 2.27.1, see patch Cluster releases 16.2.1, 16.1.6, and 17.1.6.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries Updated

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20240627104414

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20240627104414

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.40.15.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.40.15.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.40.15.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.40.15.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.40.15.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.40.15.tgz

Docker images

ambasador Updated

mirantis.azurecr.io/core/external/nginx:1.40.15

baremetal-dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-27-alpine-20240701130209

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-2-27-alpine-20240701130001

bm-collective Updated

mirantis.azurecr.io/bm/bm-collective:base-2-27-alpine-20240701130719

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.40.15

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240522120643

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:antelope-jammy-20240522120643

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240117102150

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-2-27-alpine-20240701133222

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-ba8ada4-20240405150338

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240523075821

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.25.0-40-g890ffca

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-e86184d9-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-e86184d9-amd64

syslog-ng Updated

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20240701125905

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.40.15.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.40.15.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.40.15.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.40.15.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.40.15.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.40.15.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.40.15.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.40.15.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.40.15.tgz

host-os-modules-controller

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.40.15.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.40.15.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.40.15.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.40.15.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.40.15.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.40.15.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.40.15.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.40.15.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.40.15.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.40.15.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.40.15.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.40.15.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.40.15.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.40.15.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.40.15.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.40.15.tgz

secret-controller

https://binary.mirantis.com/core/helm/secret-controller-1.40.15.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.40.15.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.40.15.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.40.15.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.40.15.tgz

vsphere-vm-template-controller

https://binary.mirantis.com/core/helm/vsphere-vm-template-controller-1.40.15.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.40.15

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.40.15

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.40.15

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.40.15

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-6

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.40.15

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.40.15

frontend Updated

mirantis.azurecr.io/core/frontend:1.40.15

host-os-modules-controller Updated

mirantis.azurecr.io/core/host-os-modules-controller:1.40.15

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.40.15

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.40.15

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.40.15

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.40.15

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.40.15

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.40.15

mcc-haproxy Updated

mirantis.azurecr.io/lcm/mcc-haproxy:v0.25.0-40-g890ffca

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.25.0-40-g890ffca

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.40.15

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.40.15

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.40.15

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.40.15

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.40.15

registry Updated

mirantis.azurecr.io/lcm/registry:v2.8.1-10

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.40.15

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.40.15

secret-controller Updated

mirantis.azurecr.io/core/secret-controller:1.40.15

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.40.15

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.40.15

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.40.15

vsphere-vm-template-controller Updated

mirantis.azurecr.io/core/vsphere-vm-template-controller:1.40.15

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/core/helm/iam-1.40.15.tgz

Docker images

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.22-20240501023013

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-ba8ada4-20240405150338

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240523075821

mcc-keycloak Updated

mirantis.azurecr.io/iam/mcc-keycloak:24.0.5-20240621131831

2.27.0

The Mirantis Container Cloud major release 2.27.0:

  • Introduces support for the Cluster release 17.2.0 that is based on the Cluster release 16.2.0 and represents Mirantis OpenStack for Kubernetes (MOSK) 24.2.

  • Introduces support for the Cluster release 16.2.0 that is based on Mirantis Container Runtime (MCR) 23.0.11 and Mirantis Kubernetes Engine (MKE) 3.7.8 with Kubernetes 1.27.

  • Does not support greenfield deployments on deprecated Cluster releases of the 17.1.x and 16.1.x series. Use the latest available Cluster releases of the series instead.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.27.0.

Enhancements

This section outlines new features and enhancements introduced in the Container Cloud release 2.27.0. For the list of enhancements delivered with the Cluster releases introduced by Container Cloud 2.27.0, see 17.2.0 and 16.2.0.

General availability for Ubuntu 22.04 on bare metal clusters

Implemented full support for Ubuntu 22.04 LTS (Jammy Jellyfish) as the default host operating system that now installs on non-MOSK bare metal management and managed clusters.

For MOSK:

  • Existing management clusters are automatically updated to Ubuntu 22.04 during cluster upgrade to Container Cloud 2.27.0 (Cluster release 16.2.0).

  • Greenfield deployments of management clusters are based on Ubuntu 22.04.

  • Existing and greenfield deployments of managed clusters are still based on Ubuntu 20.04. The support for Ubuntu 22.04 on this cluster type will be announced in one of the following releases.

Caution

Upgrading from Ubuntu 20.04 to 22.04 on existing deployments of Container Cloud managed clusters is not supported.

Improvements in the day-2 management API for bare metal clusters

TechPreview

Enhanced the day-2 management API of the bare metal provider with several key improvements:

  • Implemented the sysctl, package, and irqbalance configuration modules, which become available for usage after your management cluster upgrade to the Cluster release 16.2.0. These Container Cloud modules use the designated HostOSConfiguration object named mcc-modules to distinguish them from custom modules.

    Configuration modules allow managing the operating system of a bare metal host granularly without rebuilding the node from scratch. This approach avoids workload evacuation and significantly reduces configuration time.

  • Optimized performance for faster, more efficient operations.

  • Enhanced user experience for easier and more intuitive interactions.

  • Resolved various internal issues to ensure smoother functionality.

  • Added comprehensive documentation, including concepts, guidelines, and recommendations for effective use of day-2 operations.

Optimization of strict filtering for devices on bare metal clusters

Optimized the BareMetalHostProfile custom resource, which now uses strict byID filtering to target system disks through the reliable byPath, serialNumber, and wwn device options instead of the unpredictable byName naming format.

The optimization includes changes in admission-controller that now blocks the use of bmhp:spec:devices:by_name in new BareMetalHostProfile objects.
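
For example, a device entry in BareMetalHostProfile can now target a system disk by a stable identifier instead of its name (a schematic fragment; verify the exact field layout against the BareMetalHostProfile resource reference for your release):

spec:
  devices:
  - device:
      # One of the reliable identifiers instead of byName:
      byPath: /dev/disk/by-path/pci-0000:00:1f.2-ata-1
      # serialNumber: <diskSerialNumber>
      # wwn: <diskWWN>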

Deprecation of SubnetPool and MetalLBConfigTemplate objects

As part of refactoring of the bare metal provider, deprecated the SubnetPool and MetalLBConfigTemplate objects. The objects will be completely removed from the product in one of the following releases.

Both objects are automatically migrated to the MetalLBConfig object during cluster update to the Cluster release 17.2.0 or 16.2.0.

Learn more

Deprecation notes

The ClusterUpdatePlan object for a granular cluster update

TechPreview

Implemented the ClusterUpdatePlan custom resource to enable a granular step-by-step update of a managed cluster. The operator can control the update process by manually launching update stages using the commence flag. Between the update stages, a cluster remains functional from the perspective of cloud users and workloads.

A ClusterUpdatePlan object is automatically created by the respective Container Cloud provider when a new Cluster release becomes available for your cluster. This object contains a list of predefined self-descriptive update steps that are cluster-specific. These steps are defined in the spec section of the object with information about their impact on the cluster.
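
For example, the operator can open the plan in the project namespace and set the commence flag for the next step (a schematic example; the resource name and step layout depend on your cluster and release):

kubectl -n <projectName> edit clusterupdateplan <planName>

In the opened object, set commence: true for the step that you want to launch and save the changes.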

Update groups for worker machines

Implemented the UpdateGroup custom resource for creation of update groups for worker machines on managed clusters. The use of update groups provides enhanced control over update of worker machines. This feature decouples the concurrency settings from the global cluster level, providing update flexibility based on the workload characteristics of different worker machine sets.

LCM Agent heartbeats

Implemented the same heartbeat model for the LCM Agent as Kubernetes uses for Nodes. This model allows reflecting the actual status of the LCM Agent when it fails. For visual representation, added the corresponding LCM Agent status to the Container Cloud web UI for clusters and machines, which reflects the health status of the LCM Agent along with the status of its update to the version from the current Cluster release.

Handling secret leftovers using secret-controller

Implemented secret-controller, which runs on a management cluster and cleans up leftover credential secrets that are not removed automatically after new secrets are created. This controller replaces rhellicense-controller, proxy-controller, and byo-credentials-controller as well as partially replaces the functionality of license-controller and other credential controllers.

Note

You can change memory limits for secret-controller on a management cluster using the resources:limits parameter in the spec:providerSpec:value:kaas:management:helmReleases: section of the Cluster object.
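
For example (a minimal sketch that follows the parameter path above; the memory value is illustrative):

spec:
  providerSpec:
    value:
      kaas:
        management:
          helmReleases:
          - name: secret-controller
            values:
              resources:
                limits:
                  memory: 150Mi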

MariaDB backup for bare metal and vSphere providers

Implemented the capability to back up and restore MariaDB databases on management clusters for bare metal and vSphere providers. Also, added documentation on how to change the storage node for backups on clusters of these provider types.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.27.0 along with the Cluster releases 17.2.0 and 16.2.0.

Note

This section provides descriptions of issues addressed since the last Container Cloud patch release 2.26.5.

For details on addressed issues in earlier patch releases since 2.26.0, which are also included into the major release 2.27.0, refer to 2.26.x patch releases.

  • [42304] [StackLight] Fixed the issue with failure of shard relocation in the OpenSearch cluster on large Container Cloud managed clusters.

  • [41890] [StackLight] Fixed the issue with Patroni failing to start because of the short default timeout.

  • [40020] [StackLight] Fixed the issue with rollover_policy not being applied to the current indices while updating the policy for the current system* and audit* data streams.

  • [41819] [Ceph] Fixed the issue with the graceful cluster reboot being blocked by active Ceph ClusterWorkloadLock objects.

  • [28865] [LCM] Fixed the issue with validation of the NTP configuration before cluster deployment. Now, deployment does not start until the NTP configuration is validated.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.27.0 including the Cluster releases 17.2.0 and 16.2.0.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[47202] Inspection error on bare metal hosts after dnsmasq restart

Note

Moving forward, the workaround for this issue will be moved from Release Notes to MOSK Troubleshooting Guide: Inspection error on bare metal hosts after dnsmasq restart.

If the dnsmasq pod is restarted during the bootstrap of newly added nodes, those nodes may fail to undergo inspection. This can result in an inspection error in the corresponding BareMetalHost objects.

The issue can occur when:

  • The dnsmasq pod was moved to another node.

  • DHCP subnets were changed, including addition or removal. In this case, the dhcpd container of the dnsmasq pod is restarted.

    Caution

    If you need to change or add DHCP subnets to bootstrap new nodes, first apply the changes, then wait until the dnsmasq pod becomes ready, and only then create the BareMetalHost objects.

To verify whether the nodes are affected:

  1. Verify whether the BareMetalHost objects contain the inspection error:

    kubectl get bmh -n <managed-cluster-namespace-name>
    

    Example of system response:

    NAME            STATE         CONSUMER        ONLINE   ERROR              AGE
    test-master-1   provisioned   test-master-1   true                        9d
    test-master-2   provisioned   test-master-2   true                        9d
    test-master-3   provisioned   test-master-3   true                        9d
    test-worker-1   provisioned   test-worker-1   true                        9d
    test-worker-2   provisioned   test-worker-2   true                        9d
    test-worker-3   inspecting                    true     inspection error   19h
    
  2. Verify whether the dnsmasq pod was in the Ready state when the inspection of the affected bare metal hosts (test-worker-3 in the example above) started:

    kubectl -n kaas get pod <dnsmasq-pod-name> -oyaml
    

    Example of system response:

    ...
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: Initialized
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2024-10-11T07:38:54Z"
        status: "True"
        type: ContainersReady
      - lastProbeTime: null
        lastTransitionTime: "2024-10-10T15:37:34Z"
        status: "True"
        type: PodScheduled
      containerStatuses:
      - containerID: containerd://6dbcf2fc4b36ce4c549c9191ab01f72d0236c51d42947675302675e4bfaf4cdf
        image: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq:base-2-28-alpine-20240812132650
        imageID: docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/bm/baremetal-dnsmasq@sha256:3dad3e278add18e69b2608e462691c4823942641a0f0e25e6811e703e3c23b3b
        lastState:
          terminated:
            containerID: containerd://816fcf079cd544acd74e312065de5b5ed4dbf1dc6159fefffff4f644b5e45987
            exitCode: 0
            finishedAt: "2024-10-11T07:38:35Z"
            reason: Completed
            startedAt: "2024-10-10T15:37:45Z"
        name: dhcpd
        ready: true
        restartCount: 2
        started: true
        state:
          running:
            startedAt: "2024-10-11T07:38:37Z"
      ...
    

    In the system response above, the dhcpd container was not ready between "2024-10-11T07:38:35Z" and "2024-10-11T07:38:54Z".

  3. Verify the affected bare metal host. For example:

    kubectl get bmh -n managed-ns test-worker-3 -oyaml
    

    Example of system response:

    ...
    status:
      errorCount: 15
      errorMessage: Introspection timeout
      errorType: inspection error
      ...
      operationHistory:
        deprovision:
          end: null
          start: null
        inspect:
          end: null
          start: "2024-10-11T07:38:19Z"
        provision:
          end: null
          start: null
        register:
          end: "2024-10-11T07:38:19Z"
          start: "2024-10-11T07:37:25Z"
    

    In the system response above, inspection was started at "2024-10-11T07:38:19Z", immediately before the period of the dhcpd container downtime. Therefore, this node is most likely affected by the issue.

Workaround

  1. Reboot the node using the IPMI reset or cycle command.

  2. If the node fails to boot, remove the failed BareMetalHost object and create it again:

    1. Remove the BareMetalHost object. For example:

      kubectl delete bmh -n managed-ns test-worker-3
      
    2. Verify that the BareMetalHost object is removed:

      kubectl get bmh -n managed-ns test-worker-3
      
    3. Create a BareMetalHost object from the template. For example:

      kubectl create -f bmhc-test-worker-3.yaml
      kubectl create -f bmh-test-worker-3.yaml
      
[46245] Lack of access permissions for HOC and HOCM objects

Fixed in 2.28.0 (17.3.0 and 16.3.0)

When trying to list the HostOSConfigurationModules and HostOSConfiguration custom resources, the serviceuser or a user with the global-admin or operator role obtains an access denied error. For example:

kubectl --kubeconfig ~/.kube/mgmt-config get hocm

Error from server (Forbidden): hostosconfigurationmodules.kaas.mirantis.com is forbidden:
User "2d74348b-5669-4c65-af31-6c05dbedac5f" cannot list resource "hostosconfigurationmodules"
in API group "kaas.mirantis.com" at the cluster scope: access denied

Workaround:

  1. Modify the global-admin role by adding a new entry with the following contents to the rules list:

    kubectl edit clusterroles kaas-global-admin
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurationmodules]
      verbs: ['*']
    
  2. For each Container Cloud project, modify the kaas-operator role by adding a new entry with the following contents to the rules list:

    kubectl -n <projectName> edit roles kaas-operator
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurations]
      verbs: ['*']
    
[42386] A load balancer service does not obtain the external IP address

Due to the MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first service that was changed remains with no external IP address assigned. However, the second service, which was changed later, has the external IP address assigned as expected.

To work around the issue, make a dummy change to the service object where external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[41305] DHCP responses are lost between dnsmasq and dhcp-relay pods

Fixed in 2.28.0 (17.3.0 and 16.3.0)

After node maintenance of a management cluster, the newly added nodes may fail to undergo provisioning successfully. The issue relates to new nodes that are in the same L2 domain as the management cluster.

The issue was observed in environments where the management cluster nodes are configured with a single L2 segment used for all network traffic (PXE and LCM/management networks).

To verify whether the cluster is affected:

Verify whether the dnsmasq and dhcp-relay pods run on the same node in the management cluster:

kubectl -n kaas get pods -o wide | grep -e "dhcp\|dnsmasq"

Example of system response:

dhcp-relay-7d85f75f76-5vdw2   2/2   Running   2 (36h ago)   36h   10.10.0.122     kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (36h ago)   36h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>

If this is the case, proceed to the workaround below.

Workaround:

  1. Log in to a node that contains kubeconfig of the affected management cluster.

  2. Make sure that at least two management cluster nodes are schedulable:

    kubectl get node
    

    Example of a positive system response:

    NAME                                             STATUS   ROLES    AGE   VERSION
    kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-ad5a6f51-b98f-43c3-91d5-55fed3d0ff21   Ready    master   37h   v1.27.10-mirantis-1
    
  3. Delete the dhcp-relay pod:

    kubectl -n kaas delete pod <dhcp-relay-xxxxx>
    
  4. Verify that the dnsmasq and dhcp-relay pods are scheduled into different nodes:

    kubectl -n kaas get pods -o wide | grep -e "dhcp\|dnsmasq"
    

    Example of a positive system response:

    dhcp-relay-7d85f75f76-rkv03   2/2   Running   0             49s   10.10.0.121     kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   <none>   <none>
    dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (37h ago)   37h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.


LCM
[39437] Failure to replace a master node on a Container Cloud cluster

Fixed in 2.29.0 (17.4.0 and 16.4.0)

During the replacement of a master node on a cluster of any type, the process may get stuck with the Kubelet's NodeReady condition is Unknown message in the machine status on the remaining master nodes.

As a workaround, log in on the affected node and run the following command:

docker restart ucp-kubelet

[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.

[30294] Replacement of a master node is stuck on the calico-node Pod start

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a master node on a cluster of any type, the calico-node Pod fails to start on a new node that has the same IP address as the node being replaced.

Workaround:

  1. Log in to any master node.

  2. From a CLI with an MKE client bundle, create a shell alias to start calicoctl using the mirantis/ucp-dsinfo image:

    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/key.pem \
    -e ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/ca.pem \
    -e ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v /var/lib/docker/volumes/ucp-kv-certs/_data:/var/lib/docker/volumes/ucp-kv-certs/_data:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl \
    "
    
    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \
    -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \
    -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v ucp-node-certs:/ucp-node-certs:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl --allow-version-mismatch \
    "
    

    In the above command, replace the following values with the corresponding settings of the affected cluster:

    • <etcdEndpoint> is the etcd endpoint defined in the Calico configuration file. For example, ETCD_ENDPOINTS=127.0.0.1:12378

    • <mkeVersion> is the MKE version installed on your cluster. For example, mirantis/ucp-dsinfo:3.5.7.

  3. Verify the node list on the cluster:

    kubectl get node
    
  4. Compare this list with the node list in Calico to identify the old node:

    calicoctl get node -o wide
    
  5. Remove the old node from Calico:

    calicoctl delete node kaas-node-<nodeID>
    
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker Swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>

Ceph
[50566] Ceph upgrade is very slow during patch or major cluster update

Due to the upstream Ceph issue 66717, during a CVE-related upgrade of the Ceph daemon image of Ceph Reef 18.2.4, OSDs may start slowly and even fail the startup probe, with the following describe output in the rook-ceph-osd-X pod:

 Warning  Unhealthy  57s (x16 over 3m27s)  kubelet  Startup probe failed:
 ceph daemon health check failed with the following output:
> no valid command found; 10 closest matches:
> 0
> 1
> 2
> abort
> assert
> bluefs debug_inject_read_zeros
> bluefs files list
> bluefs stats
> bluestore bluefs device info [<alloc_size:int>]
> config diff
> admin_socket: invalid command

Workaround:

Complete the following steps during every patch or major cluster update of the Cluster releases 17.2.x, 17.3.x, and 17.4.x (until Ceph 18.2.5 becomes supported):

  1. Plan extra time in the maintenance window for the cluster update.

    Slow starts will still impact the update procedure, but after completing the following step, the recovery process noticeably shortens without affecting the overall cluster state and data responsiveness.

  2. Select one of the following options:

    • Before the cluster update, set the noout flag:

      ceph osd set noout
      

      Once the Ceph OSDs image upgrade is done, unset the flag:

      ceph osd unset noout
      
    • Monitor the Ceph OSDs image upgrade. If the symptoms of slow start appear, set the noout flag as soon as possible. Once the Ceph OSDs image upgrade is done, unset the flag.
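
      For the second option, the following commands illustrate one possible way to watch the OSD image rollout and the overall Ceph status. The rook-ceph-tools deployment name is an assumption and may differ on your cluster:

      # Watch the rook-ceph-osd pods restarting with the new image
      kubectl -n rook-ceph get pods -l app=rook-ceph-osd -w

      # Check the overall Ceph status from the Ceph tools pod (hypothetical deployment name)
      kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph -s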

[42908] The ceph-exporter pods are present in the Ceph crash list

After a managed cluster update, the ceph-exporter pods appear in the ceph crash ls list because rook-ceph-exporter attempts to bind to a port that is still in use. The issue does not block the managed cluster update. Once the port becomes available, rook-ceph-exporter obtains it and the issue disappears.

As a workaround, run ceph crash archive-all to remove ceph-exporter pods from the Ceph crash list.
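
For example, assuming the Ceph tools pod runs in the rook-ceph namespace under the commonly used rook-ceph-tools deployment name (adjust the name to your environment):

kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph crash ls
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph crash archive-all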

[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a managed cluster based on bare metal and Ceph enabled fails with PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the description of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.


StackLight
[44193] OpenSearch reaches 85% disk usage watermark affecting the cluster state

Fixed in 2.29.0 (17.4.0 and 16.4.0)

On High Availability (HA) clusters that use Local Volume Provisioner (LVP), Prometheus and OpenSearch from StackLight may share the same pool of storage. In such a configuration, OpenSearch may approach the 85% disk usage watermark due to the combined storage allocation and usage patterns set by the Persistent Volume Claim (PVC) size parameters for Prometheus and OpenSearch, which consume storage the most.

When the 85% threshold is reached, the affected node is transitioned to the read-only state, preventing shard allocation and causing the OpenSearch cluster state to transition to Warning (Yellow) or Critical (Red).

Caution

The issue and the provided workaround apply only for clusters on which OpenSearch and Prometheus utilize the same storage pool.

To verify that the cluster is affected:

  1. Verify the result of the following formula:

    0.8 × OpenSearch_PVC_Size_GB + Prometheus_PVC_Size_GB > 0.85 × Total_Storage_Capacity_GB
    

    In the formula, define the following values:

    OpenSearch_PVC_Size_GB

    Derived from .values.elasticsearch.persistentVolumeUsableStorageSizeGB, defaulting to .values.elasticsearch.persistentVolumeClaimSize if unspecified. To obtain the OpenSearch PVC size:

    kubectl -n <namespaceName> get cluster <clusterName> -o yaml |\
    yq '.spec.providerSpec.value.helmReleases[] | select(.name == "stacklight") | .values.elasticsearch.persistentVolumeClaimSize '
    

    Example of system response:

    10000Gi
    
    Prometheus_PVC_Size_GB

    Sourced from .values.prometheusServer.persistentVolumeClaimSize. To obtain the Prometheus PVC size:

    kubectl -n <namespaceName> get cluster <clusterName> -o yaml |\
    yq '.spec.providerSpec.value.helmReleases[] | select(.name == "stacklight") | .values.prometheusServer.persistentVolumeClaimSize '
    

    Example of system response:

    4000Gi
    
    Total_Storage_Capacity_GB

    Total capacity of the OpenSearch PVCs. For LVP, the capacity of the storage pool. To obtain the total capacity:

    kubectl get pvc -n stacklight -l app=opensearch-master \
    -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
    

    The system response contains multiple outputs, one per opensearch-master node. Select the capacity for the affected node.

    Note

    Convert the values to GB if they are set in different units.

    If the formula result is positive, it is an early indication that the cluster is affected. For a scripted version of this check, see the sketch after this procedure.

  2. Verify whether the OpenSearchClusterStatusWarning or OpenSearchClusterStatusCritical alert is firing. If so, verify the following:

    1. Log in to the OpenSearch web UI.

    2. In Management -> Dev Tools, run the following command:

      GET _cluster/allocation/explain
      

      The following system response indicates that the corresponding node is affected:

      "explanation": "the node is above the low watermark cluster setting \
      [cluster.routing.allocation.disk.watermark.low=85%], using more disk space \
      than the maximum allowed [85.0%], actual free: [xx.xxx%]"
      

      Note

      The system response may contain a watermark percentage higher than 85.0%, depending on the case.
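
For reference, the check from step 1 of the procedure above can be scripted. A minimal bash sketch with placeholder values, assuming all sizes are already converted to GB:

# Substitute the values obtained in step 1 (placeholders shown)
OPENSEARCH_PVC_GB=10000
PROMETHEUS_PVC_GB=4000
TOTAL_CAPACITY_GB=16000

awk -v o="$OPENSEARCH_PVC_GB" -v p="$PROMETHEUS_PVC_GB" -v t="$TOTAL_CAPACITY_GB" \
  'BEGIN { if (0.8*o + p > 0.85*t) print "possibly affected"; else print "not affected" }'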

Workaround:

Warning

The workaround implies adjustment of the retention threshold for OpenSearch. Depending on the new threshold, some old logs will be deleted.

  1. Adjust or set .values.elasticsearch.persistentVolumeUsableStorageSizeGB to a lower value so that the formula from the verification procedure above becomes non-positive. For configuration details, see MOSK Operations Guide: StackLight configuration parameters - OpenSearch.

    Mirantis also recommends reserving some space for other PVCs using storage from the pool. Use the following formula to calculate the required space:

    persistentVolumeUsableStorageSizeGB =
    0.84 × ((1 - Reserved_Percentage - Filesystem_Reserve) ×
    Total_Storage_Capacity_GB - Prometheus_PVC_Size_GB) /
    0.8
    

    In the formula, define the following values:

    Reserved_Percentage

    A user-defined variable that specifies what percentage of the total storage capacity should not be used by OpenSearch or Prometheus. This is used to reserve space for other components. It should be expressed as a decimal. For example, for 5% of reservation, Reserved_Percentage is 0.05. Mirantis recommends using 0.05 as a starting point.

    Filesystem_Reserve

    Percentage to deduct for filesystems that may reserve some portion of the available storage, which is marked as occupied. For example, for EXT4, it is 5% by default, so the value must be 0.05.

    Prometheus_PVC_Size_GB

    Sourced from .values.prometheusServer.persistentVolumeClaimSize.

    Total_Storage_Capacity_GB

    Total capacity of the OpenSearch PVCs. For LVP, the capacity of the storage pool. To obtain the total capacity:

    kubectl get pvc -n stacklight -l app=opensearch-master \
    -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
    

    The system response contains multiple outputs, one per opensearch-master node. Select the capacity for the affected node.

    Note

    Convert the values to GB if they are set in different units.

    The formula above provides the maximum safe storage to allocate for .values.elasticsearch.persistentVolumeUsableStorageSizeGB. Use it as a reference when setting this parameter on a cluster; a scripted version is provided after this procedure.

  2. Wait up to 15-20 minutes for OpenSearch to perform the cleanup.

  3. Verify that the cluster is not affected anymore using the procedure above.
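
For reference, the sizing formula from step 1 of the workaround can be computed as follows. A minimal bash sketch with placeholder values in GB; the Reserved_Percentage and Filesystem_Reserve values are assumptions that you must adjust to your environment:

# Substitute your cluster values (placeholders shown)
TOTAL_CAPACITY_GB=16000
PROMETHEUS_PVC_GB=4000
RESERVED_PERCENTAGE=0.05
FILESYSTEM_RESERVE=0.05

awk -v t="$TOTAL_CAPACITY_GB" -v p="$PROMETHEUS_PVC_GB" \
    -v r="$RESERVED_PERCENTAGE" -v f="$FILESYSTEM_RESERVE" \
  'BEGIN { printf "persistentVolumeUsableStorageSizeGB <= %.0f\n", 0.84 * ((1 - r - f) * t - p) / 0.8 }'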

[43164] Rollover policy is not added to indices created without a policy

Fixed in 2.28.0 (17.3.0 and 16.3.0)

The initial index for the system* and audit* data streams can be created without any policy attached due to a race condition.

One of the indicators that the cluster is most likely affected is the KubeJobFailed alert firing for the elasticsearch-curator job and one or both of the following errors being present in the elasticsearch-curator pods that remain in the Error status:

2024-05-31 13:16:04,459 ERROR   Failed to complete action: delete_indices.  \
<class 'curator.exceptions.FailedExecution'>: Exception encountered.  \
Rerun with loglevel DEBUG and/or check Elasticsearch logs for more information. \
Exception: RequestError(400, 'illegal_argument_exception', 'index [.ds-system-000001] \
is the write index for data stream [system] and cannot be deleted')

or

2024-05-31 13:16:04,459 ERROR   Failed to complete action: delete_indices.  \
<class 'curator.exceptions.FailedExecution'>: Exception encountered.  \
Rerun with loglevel DEBUG and/or check Elasticsearch logs for more information. \
Exception: RequestError(400, 'illegal_argument_exception', 'index [.ds-audit-000001] \
is the write index for data stream [audit] and cannot be deleted')

If the above-mentioned alert and errors are present, immediate action is required because the corresponding index size has already exceeded the space allocated for it.

To verify that the cluster is affected:

Caution

Verify and apply the workaround to both index patterns, system and audit, separately.

If one of the indices is affected, the second one is most likely affected as well, although in rare cases only one index may be affected.

  1. Log in to the opensearch-master-0 Pod:

    kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash
    
  2. Verify whether the rollover policy is attached to the index with the 000001 number:

    • system:

      curl localhost:9200/_plugins/_ism/explain/.ds-system-000001
      
    • audit:

      curl localhost:9200/_plugins/_ism/explain/.ds-audit-000001
      

    If the rollover policy is not attached, the cluster is affected. Examples of system responses in an affected cluster:

     {
      ".ds-system-000001": {
        "index.plugins.index_state_management.policy_id": null,
        "index.opendistro.index_state_management.policy_id": null,
        "enabled": null
      },
      "total_managed_indices": 0
    }
    
    {
      ".ds-audit-000001": {
        "index.plugins.index_state_management.policy_id": null,
        "index.opendistro.index_state_management.policy_id": null,
        "enabled": null
      },
      "total_managed_indices": 0
    }
    

Workaround:

  1. Log in to the opensearch-master-0 Pod:

    kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash
    
  2. Add the policy:

    • system:

      curl -XPOST -H "Content-type: application/json" localhost:9200/_plugins/_ism/add/system* -d'{"policy_id":"system_rollover_policy"}'
      
    • audit:

      curl -XPOST -H "Content-type: application/json" localhost:9200/_plugins/_ism/add/audit* -d'{"policy_id":"audit_rollover_policy"}'
      
  3. Repeat the last step of the cluster verification procedure above and make sure that the policy is attached to the index.

Container Cloud web UI
[50181] Failure to deploy a compact cluster

A compact MOSK cluster fails to be deployed through the Container Cloud web UI because the web UI does not allow adding labels to the control plane machines or changing the dedicatedControlPlane: false setting.

To work around the issue, manually add the required labels using the CLI. Once done, the cluster deployment resumes.

[50168] Inability to use a new project right after creation

A newly created project does not display all available tabs in the Container Cloud web UI and shows various access denied errors during the first five minutes after creation.

To work around the issue, refresh the browser five minutes after the project creation.

Components versions

The following table lists the major components and their versions delivered in Container Cloud 2.27.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Bare metal

baremetal-dnsmasq Updated

base-2-27-alpine-20240523143049

baremetal-operator Updated

base-2-27-alpine-20240523142757

baremetal-provider Updated

1.40.11

bm-collective Updated

base-2-27-alpine-20240523143803

cluster-api-provider-baremetal Updated

1.40.11

ironic Updated

antelope-jammy-20240522120643

ironic-inspector Updated

antelope-jammy-20240522120643

ironic-prometheus-exporter

0.1-20240117102150

kaas-ipam Updated

base-2-27-alpine-20240531082457

kubernetes-entrypoint

v1.0.1-ba8ada4-20240405150338

mariadb

10.6.17-focal-20240523075821

metallb-controller Updated

v0.14.5-e86184d9-amd64

metallb-speaker Updated

v0.14.5-e86184d9-amd64

syslog-ng

base-alpine-20240129163811

Container Cloud

admission-controller Updated

1.40.11

agent-controller Updated

1.40.11

byo-cluster-api-controller Updated

1.40.11

byo-credentials-controller Removed

n/a

ceph-kcc-controller Updated

1.40.11

cert-manager-controller

1.11.0-6

cinder-csi-plugin

1.27.2-16

client-certificate-controller Updated

1.40.11

configuration-collector Updated

1.40.11

csi-attacher

4.2.0-5

csi-node-driver-registrar

2.7.0-5

csi-provisioner

3.4.1-5

csi-resizer

1.7.0-5

csi-snapshotter

6.2.1-mcc-4

event-controller Updated

1.40.11

frontend Updated

1.40.12

golang

1.21.7-alpine3.18

iam-controller Updated

1.40.11

kaas-exporter Updated

1.40.11

kproxy Updated

1.40.11

lcm-controller Updated

1.40.11

license-controller Updated

1.40.11

livenessprobe Updated

2.9.0-5

machinepool-controller Updated

1.40.11

mcc-haproxy Updated

0.25.0-37-gc15c97d

metrics-server

0.6.3-7

nginx Updated

1.40.11

policy-controller New

1.40.11

portforward-controller Updated

1.40.11

proxy-controller Updated

1.40.11

rbac-controller Updated

1.40.11

registry

2.8.1-9

release-controller Updated

1.40.11

rhellicense-controller Removed

n/a

scope-controller Updated

1.40.11

secret-controller New

1.40.11

storage-discovery Updated

1.40.11

user-controller Updated

1.40.11

IAM

iam Updated

1.40.11

mariadb

10.6.17-focal-20240523075821

mcc-keycloak Updated

24.0.3-20240527150505

OpenStack Updated

host-os-modules-controller Updated

1.40.11

openstack-cloud-controller-manager

v1.27.2-16

openstack-cluster-api-controller

1.40.11

openstack-provider

1.40.11

os-credentials-controller

1.40.11

VMware vSphere

mcc-keepalived Updated

0.25.0-37-gc15c97d

squid-proxy

0.0.1-10-g24a0d69

vsphere-cloud-controller-manager

v1.27.0-6

vsphere-cluster-api-controller Updated

1.40.11

vsphere-credentials-controller Updated

1.40.11

vsphere-csi-driver

v3.0.2-1

vsphere-csi-syncer

v3.0.2-1

vsphere-provider Updated

1.40.11

vsphere-vm-template-controller Updated

1.40.11

Artifacts

This section lists the artifacts of components included in the Container Cloud release 2.27.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20240517093708

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20240517093708

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.40.11.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.40.11.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.40.11.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.40.11.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.40.11.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.40.11.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.40.11.tgz

Docker images

ambasador Updated

mirantis.azurecr.io/core/external/nginx:1.40.11

baremetal-dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-27-alpine-20240523143049

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-2-27-alpine-20240523142757

bm-collective Updated

mirantis.azurecr.io/bm/bm-collective:base-2-27-alpine-20240523143803

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.40.11

ironic Updated

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240522120643

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:antelope-jammy-20240522120643

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240117102150

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-2-27-alpine-20240531082457

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-ba8ada4-20240405150338

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240523075821

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.25.0-37-gc15c97d

metallb-controller Updated

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-e86184d9-amd64

metallb-speaker Updated

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-e86184d9-amd64

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20240129163811

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.40.11.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.40.11.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.40.11.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.40.11.tgz

byo-credentials-controller Removed

n/a

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.40.11.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.40.11.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.40.11.tgz

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.40.11.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.40.11.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.40.11.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.40.11.tgz

host-os-modules-controller

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.40.11.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.40.11.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.40.11.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.40.11.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.40.12.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.40.11.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.40.11.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.40.11.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.40.11.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.40.11.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.40.11.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.40.11.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.40.11.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.40.11.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.40.11.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.40.11.tgz

proxy-controller Removed

n/a

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.40.11.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.40.11.tgz

rhellicense-controller Removed

n/a

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.40.11.tgz

secret-controller New

https://binary.mirantis.com/core/helm/secret-controller-1.40.11.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.40.11.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.40.11.tgz

vsphere-cloud-controller-manager

https://binary.mirantis.com/core/helm/vsphere-cloud-controller-manager-1.40.11.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.40.11.tgz

vsphere-csi-plugin

https://binary.mirantis.com/core/helm/vsphere-csi-plugin-1.40.11.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.40.11.tgz

vsphere-vm-template-controller

https://binary.mirantis.com/core/helm/vsphere-vm-template-controller-1.40.11.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.40.11

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.40.11

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.40.11

byo-credentials-controller Removed

n/a

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.40.11

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-6

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-16

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.40.11

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.40.11

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-5

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-5

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-5

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-5

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-4

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.40.11

frontend Updated

mirantis.azurecr.io/core/frontend:1.40.12

host-os-modules-controller Updated

mirantis.azurecr.io/core/host-os-modules-controller:1.40.11

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.40.11

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.40.11

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.40.11

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.40.11

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.40.11

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-5

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.40.11

mcc-haproxy Updated

mirantis.azurecr.io/lcm/mcc-haproxy:v0.25.0-37-gc15c97d

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.25.0-37-gc15c97d

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-7

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.40.11

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-16

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.40.11

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.40.11

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.40.11

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.40.11

proxy-controller Removed

n/a

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.40.11

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-9

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.40.11

rhellicense-controller Removed

n/a

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.40.11

secret-controller New

mirantis.azurecr.io/core/secret-controller:1.40.11

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.40.11

vsphere-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/vsphere-cloud-controller-manager:v1.27.0-6

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.40.11

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.40.11

vsphere-csi-driver

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-driver:v3.0.2-1

vsphere-csi-syncer

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-syncer:v3.0.2-1

vsphere-vm-template-controller Updated

mirantis.azurecr.io/core/vsphere-vm-template-controller:1.40.11

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/core/helm/iam-1.40.11.tgz

Docker images

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.22-20240501023013

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-ba8ada4-20240405150338

mariadb

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240523075821

mcc-keycloak Updated

mirantis.azurecr.io/iam/mcc-keycloak:24.0.3-20240527150505

Security notes

In total, 408 Common Vulnerabilities and Exposures (CVE) have been fixed in Container Cloud 2.27.0 since 2.26.0: 26 of critical and 382 of high severity.

The table below includes the total numbers of addressed unique and common vulnerabilities and exposures (CVE) by product component since the 2.26.5 patch release. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component

CVE type

Critical

High

Total

Kaas core

Unique

0

7

7

Common

0

13

13

StackLight

Unique

4

14

18

Common

4

25

29

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 24.2: Security notes.

Update notes

This section describes the specific actions you as a cloud operator need to complete before or after your Container Cloud cluster update to the Cluster releases 17.2.0 or 16.2.0.

Consider this information as a supplement to the generic update procedures published in Operations Guide: Automatic upgrade of a management cluster and Update a managed cluster.

Updated scheme for patch Cluster releases

Starting from Container Cloud 2.26.5, Mirantis introduces a new update scheme allowing for the update path flexibility. For details, see Patch update schemes before and since 2.26.5. For details on MOSK update scheme, refer to MOSK documentation: Update notes.

For those clusters that update only between major versions, the update scheme remains unchanged.

Caution

In Container Cloud patch releases 2.27.1 and 2.27.2, only the 16.2.x patch Cluster releases will be delivered with an automatic update of management clusters and the possibility to update non-MOSK managed clusters.

In parallel, 2.27.1 and 2.27.2 will include new 16.1.x and 17.1.x patches for MOSK 24.1.x. And the first 17.2.x patch Cluster release for MOSK 24.2.x will be delivered in 2.27.3. For details, see MOSK documentation: Update path for 24.1 and 24.2 series.

Pre-update actions
Update bird configuration on BGP-enabled bare metal clusters

Note

If you have already completed the below procedure after updating your clusters to Container Cloud 2.26.0 (Cluster releases 17.1.0 or 16.1.0), skip this subsection.

Container Cloud 2.26.0 introduced the bird daemon update from v1.6.8 to v2.0.7 on master nodes if BGP is used for announcement of the cluster API load balancer address.

Configuration files for bird v1.x are not fully compatible with those for bird v2.x. Therefore, if you used BGP announcement of the cluster API LB address on a deployment based on Cluster releases 17.0.0 or 16.0.0, update the bird configuration files to fit bird v2.x using the configuration examples provided in the API Reference: MultiRackCluster section.

Review and adjust the storage parameters for OpenSearch

Note

If you have already completed the below procedure after updating your clusters to Container Cloud 2.26.0 (Cluster releases 17.1.0 or 16.1.0), skip this subsection.

To prevent underused or overused storage space, review your storage space parameters for OpenSearch on the StackLight cluster:

  1. Review the value of elasticsearch.persistentVolumeClaimSize and the real storage available on volumes.

  2. Decide whether you have to additionally set elasticsearch.persistentVolumeUsableStorageSizeGB.

For the description of both parameters, see MOSK Operations Guide: StackLight configuration parameters - OpenSearch.
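
For reference, you can obtain the current elasticsearch.persistentVolumeClaimSize value from the Cluster object, similarly to the OpenSearch-related procedures in the known issues above (requires yq):

kubectl -n <namespaceName> get cluster <clusterName> -o yaml |\
yq '.spec.providerSpec.value.helmReleases[] | select(.name == "stacklight") | .values.elasticsearch.persistentVolumeClaimSize'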

Post-update actions
Prepare for changing label values in Ceph metrics used in customizations

Note

If you do not use Ceph metrics in any customizations, for example, custom alerts, Grafana dashboards, or queries in custom workloads, skip this section.

In Container Cloud 2.27.0, the performance metric exporter that is integrated into the Ceph Manager daemon was deprecated in favor of the dedicated Ceph Exporter daemon. If you use Ceph metrics in any customizations, such as custom alerts, Grafana dashboards, or queries in custom tools, prepare for updating the values of several labels in these metrics. The labels will be changed in Container Cloud 2.28.0 (Cluster releases 16.3.0 and 17.3.0).

Note

Metric names will not be changed, and no metrics will be removed.

All Ceph metrics to be collected by the Ceph Exporter daemon will change their job and instance labels because the metrics will be scraped from the new Ceph Exporter daemon instead of the performance metric exporter of Ceph Manager:

  • Values of the job labels will be changed from rook-ceph-mgr to prometheus-rook-exporter for all Ceph metrics moved to Ceph Exporter. The full list of moved metrics is presented below.

  • Values of the instance labels will be changed from the metric endpoint of Ceph Manager with port 9283 to the metric endpoint of Ceph Exporter with port 9926 for all Ceph metrics moved to Ceph Exporter. The full list of moved metrics is presented below.

  • Values of the instance_id labels of Ceph metrics from the RADOS Gateway (RGW) daemons will be changed from the daemon GID to the daemon subname. For example, instead of instance_id="<RGW_PROCESS_GID>", the instance_id="a" (ceph_rgw_qlen{instance_id="a"}) will be used. The list of moved Ceph RGW metrics is presented below.
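
For example, a custom alert or Grafana query that selects one of the moved metrics by the job label would change as follows (an illustrative selector only; adjust your actual expressions accordingly):

# Before Container Cloud 2.28.0 - metrics scraped from the Ceph Manager exporter
ceph_osd_op{job="rook-ceph-mgr"}

# Since Container Cloud 2.28.0 - metrics scraped from the Ceph Exporter daemon
ceph_osd_op{job="prometheus-rook-exporter"}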

List of affected Ceph RGW metrics
  • ceph_rgw_cache_.*

  • ceph_rgw_failed_req

  • ceph_rgw_gc_retire_object

  • ceph_rgw_get.*

  • ceph_rgw_keystone_.*

  • ceph_rgw_lc_.*

  • ceph_rgw_lua_.*

  • ceph_rgw_pubsub_.*

  • ceph_rgw_put.*

  • ceph_rgw_qactive

  • ceph_rgw_qlen

  • ceph_rgw_req

List of all metrics to be collected by Ceph Exporter instead of Ceph Manager
  • ceph_bluefs_.*

  • ceph_bluestore_.*

  • ceph_mds_cache_.*

  • ceph_mds_caps

  • ceph_mds_ceph_.*

  • ceph_mds_dir_.*

  • ceph_mds_exported_inodes

  • ceph_mds_forward

  • ceph_mds_handle_.*

  • ceph_mds_imported_inodes

  • ceph_mds_inodes.*

  • ceph_mds_load_cent

  • ceph_mds_log_.*

  • ceph_mds_mem_.*

  • ceph_mds_openino_dir_fetch

  • ceph_mds_process_request_cap_release

  • ceph_mds_reply_.*

  • ceph_mds_request

  • ceph_mds_root_.*

  • ceph_mds_server_.*

  • ceph_mds_sessions_.*

  • ceph_mds_slow_reply

  • ceph_mds_subtrees

  • ceph_mon_election_.*

  • ceph_mon_num_.*

  • ceph_mon_session_.*

  • ceph_objecter_.*

  • ceph_osd_numpg.*

  • ceph_osd_op.*

  • ceph_osd_recovery_.*

  • ceph_osd_stat_.*

  • ceph_paxos.*

  • ceph_prioritycache.*

  • ceph_purge.*

  • ceph_rgw_cache_.*

  • ceph_rgw_failed_req

  • ceph_rgw_gc_retire_object

  • ceph_rgw_get.*

  • ceph_rgw_keystone_.*

  • ceph_rgw_lc_.*

  • ceph_rgw_lua_.*

  • ceph_rgw_pubsub_.*

  • ceph_rgw_put.*

  • ceph_rgw_qactive

  • ceph_rgw_qlen

  • ceph_rgw_req

  • ceph_rocksdb_.*

2.26.5

The Container Cloud patch release 2.26.5, which is based on the 2.26.0 major release, provides the following updates:

  • Support for the patch Cluster releases 16.1.5 and 17.1.5 that represents Mirantis OpenStack for Kubernetes (MOSK) patch release 24.1.5.

  • Bare metal: update of Ubuntu mirror from 20.04~20240502102020 to 20.04~20240517090228 along with update of minor kernel version from 5.15.0-105-generic to 5.15.0-107-generic.

  • Security fixes for CVEs in images.

  • Bug fixes.

This patch release also supports the latest major Cluster releases 17.1.0 and 16.1.0. It does not support greenfield deployments based on deprecated Cluster releases; use the latest available Cluster release instead.

For main deliverables of the parent Container Cloud release of 2.26.5, refer to 2.26.0.

Security notes

The table below includes the total numbers of addressed unique and common CVEs in images by product component since the Container Cloud 2.26.4 patch release. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component

CVE type

Critical

High

Total

Ceph

Unique

0

1

1

Common

0

3

3

Kaas core

Unique

0

5

5

Common

0

12

12

StackLight

Unique

1

3

4

Common

2

6

8

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 24.1.5: Security notes.

Addressed issues

The following issues have been addressed in the Container Cloud patch release 2.26.5 along with the patch Cluster releases 17.1.5 and 16.1.5.

  • [42408] [bare metal] Fixed the issue with old versions of system packages, including kernel, remaining on the manager nodes after cluster update.

  • [41540] [LCM] Fixed the issue with lcm-agent failing to grab storage information on a host and leaving lcmmachine.status.hostinfo.hardware empty due to issues with managing physical NVME devices.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.26.5 including the Cluster releases 17.1.5 and 16.1.5.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[46245] Lack of access permissions for HOC and HOCM objects

Fixed in 2.28.0 (17.3.0 and 16.3.0)

When trying to list the HostOSConfigurationModules and HostOSConfiguration custom resources, serviceuser or a user with the global-admin or operator role obtains the access denied error. For example:

kubectl --kubeconfig ~/.kube/mgmt-config get hocm

Error from server (Forbidden): hostosconfigurationmodules.kaas.mirantis.com is forbidden:
User "2d74348b-5669-4c65-af31-6c05dbedac5f" cannot list resource "hostosconfigurationmodules"
in API group "kaas.mirantis.com" at the cluster scope: access denied

Workaround:

  1. Modify the global-admin role by adding a new entry with the following contents to the rules list:

    kubectl edit clusterroles kaas-global-admin
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurationmodules]
      verbs: ['*']
    
  2. For each Container Cloud project, modify the kaas-operator role by adding a new entry with the following contents to the rules list:

    kubectl -n <projectName> edit roles kaas-operator
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurations]
      verbs: ['*']
    
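One possible non-interactive alternative is to append the same entries using kubectl patch with the role names from the steps above (a sketch; verify the resulting rules on your cluster):

kubectl patch clusterrole kaas-global-admin --type=json -p \
  '[{"op":"add","path":"/rules/-","value":{"apiGroups":["kaas.mirantis.com"],"resources":["hostosconfigurationmodules"],"verbs":["*"]}}]'

kubectl -n <projectName> patch role kaas-operator --type=json -p \
  '[{"op":"add","path":"/rules/-","value":{"apiGroups":["kaas.mirantis.com"],"resources":["hostosconfigurations"],"verbs":["*"]}}]'
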
[42386] A load balancer service does not obtain the external IP address

Due to the MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first service that was changed is left without an external IP address. However, the second service, which was changed later, has the external IP assigned as expected.

To work around the issue, make a dummy change to the service object whose external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[41305] DHCP responses are lost between dnsmasq and dhcp-relay pods

Fixed in 2.28.0 (17.3.0 and 16.3.0)

After node maintenance of a management cluster, the newly added nodes may fail to undergo provisioning successfully. The issue relates to new nodes that are in the same L2 domain as the management cluster.

The issue was observed on environments where the management cluster nodes are configured with a single L2 segment used for all network traffic (PXE and LCM/management networks).

To verify whether the cluster is affected:

Verify whether the dnsmasq and dhcp-relay pods run on the same node in the management cluster:

kubectl -n kaas get pods -o wide| grep -e "dhcp\|dnsmasq"

Example of system response:

dhcp-relay-7d85f75f76-5vdw2   2/2   Running   2 (36h ago)   36h   10.10.0.122     kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (36h ago)   36h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>

If this is the case, proceed to the workaround below.

Workaround:

  1. Log in to a node that contains kubeconfig of the affected management cluster.

  2. Make sure that at least two management cluster nodes are schedulable:

    kubectl get node
    

    Example of a positive system response:

    NAME                                             STATUS   ROLES    AGE   VERSION
    kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-ad5a6f51-b98f-43c3-91d5-55fed3d0ff21   Ready    master   37h   v1.27.10-mirantis-1
    
  3. Delete the dhcp-relay pod:

    kubectl -n kaas delete pod <dhcp-relay-xxxxx>
    
  4. Verify that the dnsmasq and dhcp-relay pods are scheduled into different nodes:

    kubectl -n kaas get pods -o wide| grep -e "dhcp\|dnsmasq"
    

    Example of a positive system response:

    dhcp-relay-7d85f75f76-rkv03   2/2   Running   0             49s   10.10.0.121     kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   <none>   <none>
    dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (37h ago)   37h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.
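
For example (the drain flags below are typical for nodes running DaemonSet-managed Pods and emptyDir volumes and are an assumption; adjust them to your environment):

kubectl cordon <nodeName>
kubectl drain <nodeName> --ignore-daemonsets --delete-emptydir-data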


LCM
[39437] Failure to replace a master node on a Container Cloud cluster

Fixed in 2.29.0 (17.4.0 and 16.4.0)

During the replacement of a master node on a cluster of any type, the process may get stuck with Kubelet's NodeReady condition is Unknown in the machine status on the remaining master nodes.

As a workaround, log in on the affected node and run the following command:

docker restart ucp-kubelet

[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.
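
A minimal sketch of these steps, assuming the affected replica is mariadb-server-0 and MariaDB runs in the kaas namespace (both the Pod name and the namespace are assumptions; adjust them to your cluster):

# 1. Back up /var/lib/mysql inside the affected Pod
kubectl -n kaas exec mariadb-server-0 -- tar -czf /tmp/mysql-backup.tar.gz /var/lib/mysql

# 2. Verify that the other replicas are up and ready
kubectl -n kaas get pods | grep mariadb

# 3. Remove the galera.cache file for the affected Pod
kubectl -n kaas exec mariadb-server-0 -- rm /var/lib/mysql/galera.cache

# 4. Delete the affected Pod or wait until it is automatically restarted
kubectl -n kaas delete pod mariadb-server-0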

[30294] Replacement of a master node is stuck on the calico-node Pod start

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a master node on a cluster of any type, the calico-node Pod fails to start on a new node that has the same IP address as the node being replaced.

Workaround:

  1. Log in to any master node.

  2. From a CLI with an MKE client bundle, create a shell alias to start calicoctl using the mirantis/ucp-dsinfo image:

    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/key.pem \
    -e ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/ca.pem \
    -e ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v /var/lib/docker/volumes/ucp-kv-certs/_data:/var/lib/docker/volumes/ucp-kv-certs/_data:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl \
    "
    
    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \
    -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \
    -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v ucp-node-certs:/ucp-node-certs:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl --allow-version-mismatch \
    "
    

    In the above command, replace the following values with the corresponding settings of the affected cluster:

    • <etcdEndpoint> is the etcd endpoint defined in the Calico configuration file. For example, ETCD_ENDPOINTS=127.0.0.1:12378

    • <mkeVersion> is the MKE version installed on your cluster. For example, mirantis/ucp-dsinfo:3.5.7.

  3. Verify the node list on the cluster:

    kubectl get node
    
  4. Compare this list with the node list in Calico to identify the old node:

    calicoctl get node -o wide
    
  5. Remove the old node from Calico:

    calicoctl delete node kaas-node-<nodeID>
    
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>

Ceph
[41819] Graceful cluster reboot is blocked by the Ceph ClusterWorkloadLocks

Fixed in 2.27.0 (17.2.0 and 16.2.0)

During graceful reboot of a cluster with Ceph enabled, the reboot is blocked with the following message in the MiraCephMaintenance object status:

message: ClusterMaintenanceRequest found, Ceph Cluster is not ready to upgrade,
 delaying cluster maintenance

As a workaround, add the following snippet to the cephFS section under metadataServer in the spec section of <kcc-name>.yaml in the Ceph cluster:

cephClusterSpec:
  sharedFilesystem:
    cephFS:
    - name: cephfs-store
      metadataServer:
        activeCount: 1
        healthCheck:
          livenessProbe:
            probe:
              failureThreshold: 5
              initialDelaySeconds: 30
              periodSeconds: 30
              successThreshold: 1
              timeoutSeconds: 5
[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a managed cluster based on bare metal and Ceph enabled fails with PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the description of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.


StackLight
[42304] Failure of shard relocation in the OpenSearch cluster

Fixed in 17.2.0, 16.2.0, 17.1.6, 16.1.6

On large managed clusters, shard relocation may fail in the OpenSearch cluster with the yellow or red status of the OpenSearch cluster. The characteristic symptom of the issue is that in the stacklight namespace, the statefulset.apps/opensearch-master containers are experiencing throttling with the KubeContainersCPUThrottlingHigh alert firing for the following set of labels:

{created_by_kind="StatefulSet",created_by_name="opensearch-master",namespace="stacklight"}

Caution

The throttling that OpenSearch is experiencing may be a temporary situation related, for example, to a peak load and the ongoing shard initialization as part of disaster recovery or after a node restart. In this case, Mirantis recommends waiting until initialization of all shards is finished. After that, verify the cluster state and whether throttling still exists. Only if throttling does not disappear, apply the workaround below.

To verify that the initialization of shards is ongoing:

kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash

curl "http://localhost:9200/_cat/shards" | grep INITIALIZING

Example of system response:

.ds-system-000072    2 r INITIALIZING    10.232.182.135 opensearch-master-1
.ds-system-000073    1 r INITIALIZING    10.232.7.145   opensearch-master-2
.ds-system-000073    2 r INITIALIZING    10.232.182.135 opensearch-master-1
.ds-audit-000001     2 r INITIALIZING    10.232.7.145   opensearch-master-2

The system response above indicates that shards from the .ds-system-000072, .ds-system-000073, and .ds-audit-000001 indices are in the INITIALIZING state. In this case, Mirantis recommends that you wait until this process is finished and only then consider changing the limit.

You can additionally analyze the exact level of throttling and the current CPU usage on the Kubernetes Containers dashboard in Grafana.

Workaround:

  1. Verify the currently configured CPU requests and limits for the opensearch containers:

    kubectl -n stacklight get statefulset.apps/opensearch-master -o jsonpath="{.spec.template.spec.containers[?(@.name=='opensearch')].resources}"
    

    Example of system response:

    {"limits":{"cpu":"600m","memory":"8Gi"},"requests":{"cpu":"500m","memory":"6Gi"}}
    

    In the example above, the CPU request is 500m and the CPU limit is 600m.

  2. Increase the CPU limit to a reasonably high number.

    For example, the default CPU limit for the clusters with the clusterSize:large parameter set was increased from 8000m to 12000m for StackLight in Container Cloud 2.27.0 (Cluster releases 17.2.0 and 16.2.0).

    Note

    For details on the clusterSize parameter, see MOSK Operations Guide: StackLight configuration parameters - Cluster size.

    If the defaults are already overridden on the affected cluster using the resourcesPerClusterSize or resources parameters as described in MOSK Operations Guide: StackLight configuration parameters - Resource limits, then the exact recommended number depends on the currently set limit.

    Mirantis recommends increasing the limit by 50%. If it does not resolve the issue, another increase iteration will be required.

  3. Once you have selected the required CPU limit, apply it as described in MOSK Operations Guide: StackLight configuration parameters - Resource limits.

    If the CPU limit for the opensearch component is already set in the Cluster object, increase the existing value. Otherwise, the default StackLight limit is used; in this case, set the CPU limit for the opensearch component using the resources parameter. See the example snippet after this procedure.

  4. Wait until all opensearch-master pods are recreated with the new CPU limits and become running and ready.

    To verify the current CPU limit for every opensearch container in every opensearch-master pod separately:

    kubectl -n stacklight get pod/opensearch-master-<podSuffixNumber> -o jsonpath="{.spec.containers[?(@.name=='opensearch')].resources}"
    

    In the command above, replace <podSuffixNumber> with the name of the pod suffix. For example, pod/opensearch-master-0 or pod/opensearch-master-2.

    Example of system response:

    {"limits":{"cpu":"900m","memory":"8Gi"},"requests":{"cpu":"500m","memory":"6Gi"}}
    

    The waiting time may take up to 20 minutes depending on the cluster size.

If the issue is fixed, the KubeContainersCPUThrottlingHigh alert stops firing immediately, while OpenSearchClusterStatusWarning or OpenSearchClusterStatusCritical can still be firing for some time during shard relocation.

If the KubeContainersCPUThrottlingHigh alert is still firing, proceed with another iteration of the CPU limit increase.
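
A possible way to set the opensearch CPU limit is through the resources parameter in the StackLight Helm release values of the Cluster object. The exact placement and the 900m value below are illustrative assumptions; follow the MOSK Operations Guide referenced above for the authoritative procedure:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: stacklight
        values:
          resources:
            opensearch:
              limits:
                cpu: "900m"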

[40020] Rollover policy update is not applied to the current index

Fixed in 17.2.0, 16.2.0, 17.1.6, 16.1.6

While updating rollover_policy for the current system* and audit* data streams, the update is not applied to indices.

One of the indicators that the cluster is most likely affected is the KubeJobFailed alert firing for the elasticsearch-curator job and one or both of the following errors being present in the elasticsearch-curator pods that remain in the Error status:

2024-05-31 13:16:04,459 ERROR   Failed to complete action: delete_indices.  <class 'curator.exceptions.FailedExecution'>: Exception encountered.  Rerun with loglevel DEBUG and/or check Elasticsearch logs for more information. Exception: RequestError(400, 'illegal_argument_exception', 'index [.ds-audit-000001] is the write index for data stream [audit] and cannot be deleted')

or

2024-05-31 13:16:04,459 ERROR   Failed to complete action: delete_indices.  <class 'curator.exceptions.FailedExecution'>: Exception encountered.  Rerun with loglevel DEBUG and/or check Elasticsearch logs for more information. Exception: RequestError(400, 'illegal_argument_exception', 'index [.ds-system-000001] is the write index for data stream [system] and cannot be deleted')

Note

Instead of .ds-audit-000001 or .ds-system-000001 index names, similar names can be present with the same prefix but different suffix numbers.

If the above-mentioned alert and errors are present, immediate action is required because the corresponding index size has already exceeded the space allocated for it.

To verify that the cluster is affected:

Caution

Verify and apply the workaround to both index patterns, system and audit, separately.

If one of the indices is affected, the second one is most likely affected as well, although in rare cases only one index may be affected.

  1. Log in to the opensearch-master-0 Pod:

    kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash
    
  2. Verify that the rollover policy is present:

    • system:

      curl localhost:9200/_plugins/_ism/policies/system_rollover_policy
      
    • audit:

      curl localhost:9200/_plugins/_ism/policies/audit_rollover_policy
      

    The cluster is affected if the rollover policy is missing. Otherwise, proceed to the following step.

  3. Verify the system response from the previous step. For example:

    {"_id":"system_rollover_policy","_version":7229,"_seq_no":42362,"_primary_term":28,"policy":{"policy_id":"system_rollover_policy","description":"system index rollover policy.","last_updated_time":1708505222430,"schema_version":19,"error_notification":null,"default_state":"rollover","states":[{"name":"rollover","actions":[{"retry":{"count":3,"backoff":"exponential","delay":"1m"},"rollover":{"min_size":"14746mb","copy_alias":false}}],"transitions":[]}],"ism_template":[{"index_patterns":["system*"],"priority":200,"last_updated_time":1708505222430}]}}
    

    Verify and capture the following items separately for every policy:

    • The _seq_no and _primary_term values

    • The rollover policy threshold, which is defined in policy.states[0].actions[0].rollover.min_size

  4. List indices:

    • system:

      curl localhost:9200/_cat/indices | grep system
      

      Example of system response:

      [...]
      green open .ds-system-000001   FjglnZlcTKKfKNbosaE9Aw 2 1 1998295  0   1gb 507.9mb
      
    • audit:

      curl localhost:9200/_cat/indices | grep audit
      

      Example of system response:

      [...]
      green open .ds-audit-000001   FjglnZlcTKKfKNbosaE9Aw 2 1 1998295  0   1gb 507.9mb
      
  5. Select the index with the highest number and verify the rollover policy attached to the index:

    • system:

      curl localhost:9200/_plugins/_ism/explain/.ds-system-000001
      
    • audit:

      curl localhost:9200/_plugins/_ism/explain/.ds-audit-000001
      
    • If the rollover policy is not attached, the cluster is affected.

    • If the rollover policy is attached but _seq_no and _primary_term numbers do not match the previously captured ones, the cluster is affected.

    • If the index size drastically exceeds the defined threshold of the rollover policy (which is the previously captured min_size), the cluster is most probably affected.

Workaround:

  1. Log in to the opensearch-master-0 Pod:

    kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash
    
  2. If the policy is attached to the index but has different _seq_no and _primary_term, remove the policy from the index:

    Note

    Use the index with the highest number in the name, which you captured during the verification procedure.

    • system:

      curl -XPOST localhost:9200/_plugins/_ism/remove/.ds-system-000001
      
    • audit:

      curl -XPOST localhost:9200/_plugins/_ism/remove/.ds-audit-000001
      
  3. Re-add the policy:

    • system:

      curl -XPOST -H "Content-type: application/json" localhost:9200/_plugins/_ism/add/system* -d'{"policy_id":"system_rollover_policy"}'
      
    • audit:

      curl -XPOST -H "Content-type: application/json" localhost:9200/_plugins/_ism/add/audit* -d'{"policy_id":"audit_rollover_policy"}'
      
  4. Repeat the last step of the cluster verification procedure provided above and make sure that the policy is attached to the index and has the same _seq_no and _primary_term.

    If the index size drastically exceeds the defined threshold of the rollover policy (which is the previously captured min_size), wait up to 15 minutes and verify that the additional index is created with the consecutive number in the index name. For example:

    • system: if you applied changes to .ds-system-000001, wait until .ds-system-000002 is created.

    • audit: if you applied changes to .ds-audit-000001, wait until .ds-audit-000002 is created.

    If such an index is not created, escalate the issue to Mirantis support.
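
    For example, to check from inside the opensearch-master-0 Pod whether the next index has appeared, you can rerun the index listing used during verification (a minimal sketch):

    curl localhost:9200/_cat/indices | grep system
    curl localhost:9200/_cat/indices | grep audit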

Update notes

This section describes the specific actions you as a cloud operator need to complete before or after your Container Cloud cluster update to the Cluster releases 17.1.5 or 16.1.5.

Consider this information as a supplement to the generic update procedures published in Operations Guide: Automatic upgrade of a management cluster and Update a managed cluster.

Update scheme for patch Cluster releases

To improve the user update experience and make the update path more flexible, Container Cloud is introducing a new scheme of updating between patch Cluster releases. More specifically, the ultimate goal is to let you update to any newer patch version within a single series at any point in time. Downgrading to an earlier patch version is not supported.

However, in some cases, Mirantis may require an update to a specific patch version in the series before you can update to the next major series. This may be necessary due to the specifics of the technical content already released or planned for the release. For possible update paths in the MOSK 24.1 and 24.2 series, see MOSK documentation: Cluster update scheme.

The exact number of patch releases for the 16.1.x and 17.1.x series is yet to be confirmed, but the current target is 7 releases.

Note

The management cluster update scheme remains the same. A management cluster obtains the new product version automatically after release.

Post-update actions
Delete ‘HostOSConfiguration’ objects on baremetal-based clusters

If you use the HostOSConfiguration and HostOSConfigurationModules custom resources for the bare metal provider, which are available in the Technology Preview scope in Container Cloud 2.26.x, delete all HostOSConfiguration objects right after update of your managed cluster to the Cluster release 17.1.5 or 16.1.5, before automatic upgrade of the management cluster to Container Cloud 2.27.0 (Cluster release 16.2.0). After the upgrade, you can recreate the required objects using the updated parameters.

This precautionary step prevents the existing configuration defined in HostOSConfiguration objects from being re-processed and re-applied during the management cluster upgrade to 2.27.0. Such behavior is caused by changes in the HostOSConfiguration API introduced in 2.27.0.
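
For example, a minimal sketch of listing and removing these objects with kubectl on the management cluster; the resource plural name follows the kaas.mirantis.com API group used elsewhere in this document, and the project namespace is a placeholder to adjust to your environment:

# List all HostOSConfiguration objects across projects.
kubectl get hostosconfigurations.kaas.mirantis.com -A

# Delete all HostOSConfiguration objects in a given project namespace.
kubectl -n <projectName> delete hostosconfigurations.kaas.mirantis.com --all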

Configure Kubernetes auditing and profiling for log rotation

Note

Skip this procedure if you have already completed it after updating your managed cluster to Container Cloud 2.26.4 (Cluster release 17.1.4 or 16.1.4).

After the MKE update to 3.7.8, if you are going to enable or already enabled Kubernetes auditing and profiling on your managed or management cluster, keep in mind that enabling audit log rotation requires an additional step. Set the following options in the MKE configuration file after enabling auditing and profiling:

[cluster_config]
  kube_api_server_audit_log_maxage=30
  kube_api_server_audit_log_maxbackup=10
  kube_api_server_audit_log_maxsize=10

For the configuration procedure, see MKE documentation: Configure an existing MKE cluster.

While using this procedure, replace the command to upload the newly edited MKE configuration file with the following one:

curl --silent --insecure -X PUT -H "X-UCP-Allow-Restricted-API: i-solemnly-swear-i-am-up-to-no-good" -H "accept: application/toml" -H "Authorization: Bearer $AUTHTOKEN" --upload-file 'mke-config.toml' https://$MKE_HOST/api/ucp/config-toml

In the command above:

  • The value for MKE_HOST has the <loadBalancerHost>:6443 format, where loadBalancerHost is the corresponding field in the cluster status.

  • The value for MKE_PASSWORD is taken from the ucp-admin-password-<clusterName> secret in the cluster namespace of the management cluster.

  • The value for MKE_USERNAME is always admin.
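
For reference, the following is a minimal sketch of setting these variables and obtaining AUTHTOKEN before running the command above. The secret data key and the MKE /auth/login endpoint are assumptions based on typical MKE deployments, and jq is used for JSON parsing; adjust the names to your environment.

# Assumption: the admin password is stored under the "password" key of the
# ucp-admin-password-<clusterName> secret in the cluster namespace of the
# management cluster.
MKE_USERNAME=admin
MKE_PASSWORD=$(kubectl -n <clusterNamespace> get secret ucp-admin-password-<clusterName> \
  -o jsonpath='{.data.password}' | base64 -d)
MKE_HOST=<loadBalancerHost>:6443

# Obtain an authentication token from MKE (assumes the standard /auth/login endpoint).
AUTHTOKEN=$(curl --silent --insecure -X POST \
  -d "{\"username\":\"$MKE_USERNAME\",\"password\":\"$MKE_PASSWORD\"}" \
  https://$MKE_HOST/auth/login | jq -r .auth_token)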

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.26.5. For artifacts of the Cluster releases introduced in 2.26.5, see patch Cluster releases 17.1.5 and 16.1.5.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries Updated

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20240517093708

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20240517093708

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.39.28.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.39.28.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.39.28.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.39.28.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.39.28.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.39.28.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.39.28.tgz

Docker images

ambasador Updated

mirantis.azurecr.io/core/external/nginx:1.39.28

baremetal-dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-26-alpine-20240523095922

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-2-26-alpine-20240523095601

bm-collective

mirantis.azurecr.io/bm/bm-collective:base-2-26-alpine-20240408142218

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.39.28

ironic Updated

mirantis.azurecr.io/openstack/ironic:yoga-jammy-20240522120640

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:yoga-jammy-20240522120640

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240117102150

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-2-26-alpine-20240408150853

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-ba8ada4-20240405150338

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240523075821

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.24.0-47-gf77368e

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.13.12-ef4c9453-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.13.12-ef4c9453-amd64

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20240129163811

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.39.28.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.39.28.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.39.28.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.39.28.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.39.28.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.39.28.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.39.28.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.39.28.tgz

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.39.28.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.39.28.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.39.28.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.39.28.tgz

host-os-modules-controller

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.39.28.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.39.28.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.39.28.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.39.28.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.39.28.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.39.28.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.39.28.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.39.28.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.39.28.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.39.28.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.39.28.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.39.28.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.39.28.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.39.28.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.39.28.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.39.28.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.39.28.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.39.28.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.39.28.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.39.28.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.39.28.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.39.28.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.39.28.tgz

vsphere-cloud-controller-manager

https://binary.mirantis.com/core/helm/vsphere-cloud-controller-manager-1.39.28.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.39.28.tgz

vsphere-csi-plugin

https://binary.mirantis.com/core/helm/vsphere-csi-plugin-1.39.28.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.39.28.tgz

vsphere-vm-template-controller

https://binary.mirantis.com/core/helm/vsphere-vm-template-controller-1.39.28.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.39.28

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.39.28

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.39.28

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.39.28

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.39.28

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-6

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-16

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.39.28

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.39.28

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-5

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-5

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-5

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-5

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-4

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.39.28

frontend Updated

mirantis.azurecr.io/core/frontend:1.39.28

host-os-modules-controller Updated

mirantis.azurecr.io/core/host-os-modules-controller:1.39.28

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.39.28

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.39.28

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.39.28

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.39.28

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.39.28

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-5

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.39.28

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.24.0-47-gf77368e

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.24.0-47-gf77368e

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-7

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.39.28

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-16

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.39.28

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.39.28

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.39.28

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.39.28

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.39.28

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.39.28

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-9

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.39.28

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.39.28

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.39.28

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.39.28

vsphere-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/vsphere-cloud-controller-manager:v1.27.0-6

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.39.28

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.39.28

vsphere-csi-driver

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-driver:v3.0.2-1

vsphere-csi-syncer

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-syncer:v3.0.2-1

vsphere-vm-template-controller Updated

mirantis.azurecr.io/core/vsphere-vm-template-controller:1.39.28

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts

iam Updated

https://binary.mirantis.com/core/helm/iam-1.39.28.tgz

Docker images

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.22-20240501023013

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-ba8ada4-20240405150338

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240523075821

mcc-keycloak

mirantis.azurecr.io/iam/mcc-keycloak:23.0.6-20240216125244

See also

Patch releases

2.26.4

The Container Cloud patch release 2.26.4, which is based on the 2.26.0 major release, provides the following updates:

  • Support for the patch Cluster releases 16.1.4 and 17.1.4 that represent Mirantis OpenStack for Kubernetes (MOSK) patch release 24.1.4.

  • Support for MKE 3.7.8.

  • Bare metal: update of Ubuntu mirror from 20.04~20240411171541 to 20.04~20240502102020 along with update of minor kernel version from 5.15.0-102-generic to 5.15.0-105-generic.

  • Security fixes for CVEs in images.

  • Bug fixes.

This patch release also supports the latest major Cluster releases 17.1.0 and 16.1.0. It does not support greenfield deployments based on deprecated Cluster releases; use the latest available Cluster release instead.

For main deliverables of the parent Container Cloud release of 2.26.4, refer to 2.26.0.

Security notes

The table below includes the total numbers of addressed unique and common CVEs in images by product component since the Container Cloud 2.26.3 patch release. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component

CVE type

Critical

High

Total

Ceph

Unique

0

1

1

Common

0

3

3

StackLight

Unique

2

8

10

Common

6

9

15

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 24.1.4: Security notes.

Addressed issues

The following issues have been addressed in the Container Cloud patch release 2.26.4 along with the patch Cluster releases 17.1.4 and 16.1.4.

  • [41806] [Container Cloud web UI] Fixed the issue with failure to configure management cluster using the Configure cluster web UI menu without updating the Keycloak Truststore settings.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.26.4 including the Cluster releases 17.1.4 and 16.1.4.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[46245] Lack of access permissions for HOC and HOCM objects

Fixed in 2.28.0 (17.3.0 and 16.3.0)

When trying to list the HostOSConfigurationModules and HostOSConfiguration custom resources, serviceuser or a user with the global-admin or operator role obtains the access denied error. For example:

kubectl --kubeconfig ~/.kube/mgmt-config get hocm

Error from server (Forbidden): hostosconfigurationmodules.kaas.mirantis.com is forbidden:
User "2d74348b-5669-4c65-af31-6c05dbedac5f" cannot list resource "hostosconfigurationmodules"
in API group "kaas.mirantis.com" at the cluster scope: access denied

Workaround:

  1. Modify the global-admin role by adding a new entry with the following contents to the rules list:

    kubectl edit clusterroles kaas-global-admin
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurationmodules]
      verbs: ['*']
    
  2. For each Container Cloud project, modify the kaas-operator role by adding a new entry with the following contents to the rules list:

    kubectl -n <projectName> edit roles kaas-operator
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurations]
      verbs: ['*']
    
[42408] Kernel is not updated on manager nodes after cluster update

Fixed in 17.1.5 and 16.1.5

After a managed cluster update, old versions of system packages, including the kernel, may remain on the manager nodes. This issue occurs because the task responsible for updating packages fails to run after the Ubuntu mirrors are updated.

As a workaround, manually run apt-get upgrade on every manager node after the cluster update but before rebooting the node.
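
For example, on each manager node (a minimal sketch; run it before rebooting the node):

apt-get update
apt-get upgrade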

[42386] A load balancer service does not obtain the external IP address

Due to a MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first service that was changed is left without an external IP address, while the second service, which was changed later, has the external IP assigned as expected.

To work around the issue, make a dummy change to the service object whose external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[41305] DHCP responses are lost between dnsmasq and dhcp-relay pods

Fixed in 2.28.0 (17.3.0 and 16.3.0)

After node maintenance of a management cluster, newly added nodes may fail to undergo provisioning successfully. The issue affects new nodes that are in the same L2 domain as the management cluster.

The issue was observed on environments where the management cluster nodes are configured with a single L2 segment used for all network traffic (PXE and LCM/management networks).

To verify whether the cluster is affected:

Verify whether the dnsmasq and dhcp-relay pods run on the same node in the management cluster:

kubectl -n kaas get pods -o wide| grep -e "dhcp\|dnsmasq"

Example of system response:

dhcp-relay-7d85f75f76-5vdw2   2/2   Running   2 (36h ago)   36h   10.10.0.122     kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (36h ago)   36h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>

If this is the case, proceed to the workaround below.

Workaround:

  1. Log in to a node that contains kubeconfig of the affected management cluster.

  2. Make sure that at least two management cluster nodes are schedulable:

    kubectl get node
    

    Example of a positive system response:

    NAME                                             STATUS   ROLES    AGE   VERSION
    kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-ad5a6f51-b98f-43c3-91d5-55fed3d0ff21   Ready    master   37h   v1.27.10-mirantis-1
    
  3. Delete the dhcp-relay pod:

    kubectl -n kaas delete pod <dhcp-relay-xxxxx>
    
  4. Verify that the dnsmasq and dhcp-relay pods are scheduled into different nodes:

    kubectl -n kaas get pods -o wide| grep -e "dhcp\|dnsmasq"
    

    Example of a positive system response:

    dhcp-relay-7d85f75f76-rkv03   2/2   Running   0             49s   10.10.0.121     kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   <none>   <none>
    dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (37h ago)   37h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.


LCM
[41540] LCM Agent cannot grab storage information on a host

Fixed in 17.1.5 and 16.1.5

Due to issues with managing physical NVME devices, lcm-agent cannot grab storage information on a host. As a result, lcmmachine.status.hostinfo.hardware is empty and the following example error is present in logs:

{"level":"error","ts":"2024-05-02T12:26:10Z","logger":"agent", \
"msg":"get hardware details", \
"host":"kaas-node-548b2861-aed0-41c9-8ff2-10c5476b000b", \
"error":"new storage info: get disk info \"nvme0c0n1\": \
invoke command: exit status 1","errorVerbose":"exit status 1

As a workaround, on the affected node, create a symlink for any device indicated in lcm-agent logs. For example:

ln -sfn /dev/nvme0n1 /dev/nvme0c0n1

[39437] Failure to replace a master node on a Container Cloud cluster

Fixed in 2.29.0 (17.4.0 and 16.4.0)

During the replacement of a master node on a cluster of any type, the process may get stuck with Kubelet's NodeReady condition is Unknown in the machine status on the remaining master nodes.

As a workaround, log in on the affected node and run the following command:

docker restart ucp-kubelet

[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.
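
For illustration, a minimal sketch of these steps with kubectl; the namespace, Pod, and container names below are placeholders, so adjust them to your management cluster:

# Step 1: back up /var/lib/mysql from the affected Pod.
kubectl -n <namespace> cp <mariadb-server-pod>:/var/lib/mysql ./mysql-backup -c <mariadb-container>
# Step 2: verify that the other replicas are up and ready.
kubectl -n <namespace> get pods | grep mariadb-server
# Step 3: remove the galera.cache file on the affected Pod.
kubectl -n <namespace> exec <mariadb-server-pod> -c <mariadb-container> -- rm /var/lib/mysql/galera.cache
# Step 4: delete the affected Pod or wait until it is automatically restarted.
kubectl -n <namespace> delete pod <mariadb-server-pod>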

[30294] Replacement of a master node is stuck on the calico-node Pod start

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a master node on a cluster of any type, the calico-node Pod fails to start on a new node that has the same IP address as the node being replaced.

Workaround:

  1. Log in to any master node.

  2. From a CLI with an MKE client bundle, create a shell alias to start calicoctl using the mirantis/ucp-dsinfo image:

    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/key.pem \
    -e ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/ca.pem \
    -e ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v /var/lib/docker/volumes/ucp-kv-certs/_data:/var/lib/docker/volumes/ucp-kv-certs/_data:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl \
    "
    
    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \
    -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \
    -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v ucp-node-certs:/ucp-node-certs:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl --allow-version-mismatch \
    "
    

    In the above command, replace the following values with the corresponding settings of the affected cluster:

    • <etcdEndpoint> is the etcd endpoint defined in the Calico configuration file. For example, ETCD_ENDPOINTS=127.0.0.1:12378

    • <mkeVersion> is the MKE version installed on your cluster. For example, mirantis/ucp-dsinfo:3.5.7.

  3. Verify the node list on the cluster:

    kubectl get node
    
  4. Compare this list with the node list in Calico to identify the old node:

    calicoctl get node -o wide
    
  5. Remove the old node from Calico:

    calicoctl delete node kaas-node-<nodeID>
    
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>

Ceph
[41819] Graceful cluster reboot is blocked by the Ceph ClusterWorkloadLocks

Fixed in 2.27.0 (17.2.0 and 16.2.0)

During graceful reboot of a cluster with Ceph enabled, the reboot is blocked with the following message in the MiraCephMaintenance object status:

message: ClusterMaintenanceRequest found, Ceph Cluster is not ready to upgrade,
 delaying cluster maintenance

As a workaround, add the following healthCheck snippet under the metadataServer section of the cephFS specification in the spec section of <kcc-name>.yaml in the Ceph cluster:

cephClusterSpec:
  sharedFilesystem:
    cephFS:
    - name: cephfs-store
      metadataServer:
        activeCount: 1
        healthCheck:
          livenessProbe:
            probe:
              failureThreshold: 5
              initialDelaySeconds: 30
              periodSeconds: 30
              successThreshold: 1
              timeoutSeconds: 5
[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a bare metal based managed cluster with Ceph enabled fails with a PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the descriptions of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.


StackLight
[42304] Failure of shard relocation in the OpenSearch cluster

Fixed in 17.2.0, 16.2.0, 17.1.6, 16.1.6

On large managed clusters, shard relocation may fail in the OpenSearch cluster with the yellow or red status of the OpenSearch cluster. The characteristic symptom of the issue is that in the stacklight namespace, the statefulset.apps/opensearch-master containers are experiencing throttling with the KubeContainersCPUThrottlingHigh alert firing for the following set of labels:

{created_by_kind="StatefulSet",created_by_name="opensearch-master",namespace="stacklight"}

Caution

The throttling that OpenSearch is experiencing may be a temporary situation, which may be related, for example, to a peak load and the ongoing shard initialization as part of disaster recovery or after a node restart. In this case, Mirantis recommends waiting until initialization of all shards is finished, and then verifying the cluster state and whether throttling still exists. Apply the workaround below only if throttling does not disappear.

To verify that the initialization of shards is ongoing:

kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash

curl "http://localhost:9200/_cat/shards" | grep INITIALIZING

Example of system response:

.ds-system-000072    2 r INITIALIZING    10.232.182.135 opensearch-master-1
.ds-system-000073    1 r INITIALIZING    10.232.7.145   opensearch-master-2
.ds-system-000073    2 r INITIALIZING    10.232.182.135 opensearch-master-1
.ds-audit-000001     2 r INITIALIZING    10.232.7.145   opensearch-master-2

The system response above indicates that shards from the .ds-system-000072, .ds-system-000073, and .ds-audit-000001 indices are in the INITIALIZING state. In this case, Mirantis recommends waiting until this process is finished and only then considering a change of the limit.

You can additionally analyze the exact level of throttling and the current CPU usage on the Kubernetes Containers dashboard in Grafana.

Workaround:

  1. Verify the currently configured CPU requests and limits for the opensearch containers:

    kubectl -n stacklight get statefulset.apps/opensearch-master -o jsonpath="{.spec.template.spec.containers[?(@.name=='opensearch')].resources}"
    

    Example of system response:

    {"limits":{"cpu":"600m","memory":"8Gi"},"requests":{"cpu":"500m","memory":"6Gi"}}
    

    In the example above, the CPU request is 500m and the CPU limit is 600m.

  2. Increase the CPU limit to a reasonably high number.

    For example, the default CPU limit for the clusters with the clusterSize:large parameter set was increased from 8000m to 12000m for StackLight in Container Cloud 2.27.0 (Cluster releases 17.2.0 and 16.2.0).

    Note

    For details on the clusterSize parameter, see MOSK Operations Guide: StackLight configuration parameters - Cluster size.

    If the defaults are already overridden on the affected cluster using the resourcesPerClusterSize or resources parameters as described in MOSK Operations Guide: StackLight configuration parameters - Resource limits, then the exact recommended number depends on the currently set limit.

    Mirantis recommends increasing the limit by 50%. If it does not resolve the issue, another increase iteration will be required.

  3. Once you have selected the required CPU limit, increase it as described in MOSK Operations Guide: StackLight configuration parameters - Resource limits.

    If a CPU limit is already set for the opensearch component, increase it in the Cluster object under the opensearch parameter. Otherwise, the default StackLight limit applies; in this case, set the CPU limit for the opensearch component using the resources parameter. See the example Cluster object snippet at the end of this issue description.

  4. Wait until all opensearch-master pods are recreated with the new CPU limits and become running and ready.

    To verify the current CPU limit for every opensearch container in every opensearch-master pod separately:

    kubectl -n stacklight get pod/opensearch-master-<podSuffixNumber> -o jsonpath="{.spec.containers[?(@.name=='opensearch')].resources}"
    

    In the command above, replace <podSuffixNumber> with the pod suffix number. For example, pod/opensearch-master-0 or pod/opensearch-master-2.

    Example of system response:

    {"limits":{"cpu":"900m","memory":"8Gi"},"requests":{"cpu":"500m","memory":"6Gi"}}
    

    The waiting time may be up to 20 minutes depending on the cluster size.

If the issue is fixed, the KubeContainersCPUThrottlingHigh alert stops firing immediately, while OpenSearchClusterStatusWarning or OpenSearchClusterStatusCritical can still be firing for some time during shard relocation.

If the KubeContainersCPUThrottlingHigh alert is still firing, proceed with another iteration of the CPU limit increase.
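
The following is a minimal sketch of such an override in the Cluster object, assuming that StackLight is configured through the stacklight Helm release values and using the resources parameter structure described in the MOSK Operations Guide sections referenced above; the CPU value is illustrative only:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: stacklight
        values:
          resources:
            opensearch:
              limits:
                cpu: "12000m"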

[40020] Rollover policy update is not applied to the current index

Fixed in 17.2.0, 16.2.0, 17.1.6, 16.1.6

While updating rollover_policy for the current system* and audit* data streams, the update is not applied to indices.

One of the indicators that the cluster is most likely affected is the KubeJobFailed alert firing for the elasticsearch-curator job along with one or both of the following errors being present in the elasticsearch-curator pods that remain in the Error status:

2024-05-31 13:16:04,459 ERROR   Failed to complete action: delete_indices.  <class 'curator.exceptions.FailedExecution'>: Exception encountered.  Rerun with loglevel DEBUG and/or check Elasticsearch logs for more information. Exception: RequestError(400, 'illegal_argument_exception', 'index [.ds-audit-000001] is the write index for data stream [audit] and cannot be deleted')

or

2024-05-31 13:16:04,459 ERROR   Failed to complete action: delete_indices.  <class 'curator.exceptions.FailedExecution'>: Exception encountered.  Rerun with loglevel DEBUG and/or check Elasticsearch logs for more information. Exception: RequestError(400, 'illegal_argument_exception', 'index [.ds-system-000001] is the write index for data stream [system] and cannot be deleted')

Note

Instead of .ds-audit-000001 or .ds-system-000001 index names, similar names can be present with the same prefix but different suffix numbers.

If the above-mentioned alert and errors are present, immediate action is required because this indicates that the corresponding index size has already exceeded the space allocated for the index.

To verify that the cluster is affected:

Caution

Verify and apply the workaround to both index patterns, system and audit, separately.

If one of the indices is affected, the second one is most likely affected as well, although in rare cases only one index may be affected.

  1. Log in to the opensearch-master-0 Pod:

    kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash
    
  2. Verify that the rollover policy is present:

    • system:

      curl localhost:9200/_plugins/_ism/policies/system_rollover_policy
      
    • audit:

      curl localhost:9200/_plugins/_ism/policies/audit_rollover_policy
      

    The cluster is affected if the rollover policy is missing. Otherwise, proceed to the following step.

  3. Verify the system response from the previous step. For example:

    {"_id":"system_rollover_policy","_version":7229,"_seq_no":42362,"_primary_term":28,"policy":{"policy_id":"system_rollover_policy","description":"system index rollover policy.","last_updated_time":1708505222430,"schema_version":19,"error_notification":null,"default_state":"rollover","states":[{"name":"rollover","actions":[{"retry":{"count":3,"backoff":"exponential","delay":"1m"},"rollover":{"min_size":"14746mb","copy_alias":false}}],"transitions":[]}],"ism_template":[{"index_patterns":["system*"],"priority":200,"last_updated_time":1708505222430}]}}
    

    Verify and capture the following items separately for every policy:

    • The _seq_no and _primary_term values

    • The rollover policy threshold, which is defined in policy.states[0].actions[0].rollover.min_size

  4. List indices:

    • system:

      curl localhost:9200/_cat/indices | grep system
      

      Example of system response:

      [...]
      green open .ds-system-000001   FjglnZlcTKKfKNbosaE9Aw 2 1 1998295  0   1gb 507.9mb
      
    • audit:

      curl localhost:9200/_cat/indices | grep audit
      

      Example of system response:

      [...]
      green open .ds-audit-000001   FjglnZlcTKKfKNbosaE9Aw 2 1 1998295  0   1gb 507.9mb
      
  5. Select the index with the highest number and verify the rollover policy attached to the index:

    • system:

      curl localhost:9200/_plugins/_ism/explain/.ds-system-000001
      
    • audit:

      curl localhost:9200/_plugins/_ism/explain/.ds-audit-000001
      
    • If the rollover policy is not attached, the cluster is affected.

    • If the rollover policy is attached but _seq_no and _primary_term numbers do not match the previously captured ones, the cluster is affected.

    • If the index size drastically exceeds the defined threshold of the rollover policy (which is the previously captured min_size), the cluster is most probably affected.

Workaround:

  1. Log in to the opensearch-master-0 Pod:

    kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash
    
  2. If the policy is attached to the index but has different _seq_no and _primary_term, remove the policy from the index:

    Note

    Use the index with the highest number in the name, which you captured during the verification procedure.

    • system:

      curl -XPOST localhost:9200/_plugins/_ism/remove/.ds-system-000001
      
    • audit:

      curl -XPOST localhost:9200/_plugins/_ism/remove/.ds-audit-000001
      
  3. Re-add the policy:

    • system:

      curl -XPOST -H "Content-type: application/json" localhost:9200/_plugins/_ism/add/system* -d'{"policy_id":"system_rollover_policy"}'
      
    • audit:

      curl -XPOST -H "Content-type: application/json" localhost:9200/_plugins/_ism/add/audit* -d'{"policy_id":"audit_rollover_policy"}'
      
  4. Repeat the last step of the cluster verification procedure provided above and make sure that the policy is attached to the index and has the same _seq_no and _primary_term.

    If the index size drastically exceeds the defined threshold of the rollover policy (which is the previously captured min_size), wait up to 15 minutes and verify that the additional index is created with the consecutive number in the index name. For example:

    • system: if you applied changes to .ds-system-000001, wait until .ds-system-000002 is created.

    • audit: if you applied changes to .ds-audit-000001, wait until .ds-audit-000002 is created.

    If such an index is not created, escalate the issue to Mirantis support.

Update notes

This section describes the specific actions you as a cloud operator need to complete before or after your Container Cloud cluster update to the Cluster releases 17.1.4 or 16.1.4.

Consider this information as a supplement to the generic update procedures published in Operations Guide: Automatic upgrade of a management cluster and Update a patch Cluster release of a managed cluster.

Post-update actions
Configure Kubernetes auditing and profiling for log rotation

After the MKE update to 3.7.8, if you are going to enable or already enabled Kubernetes auditing and profiling on your managed or management cluster, keep in mind that enabling audit log rotation requires an additional step. Set the following options in the MKE configuration file after enabling auditing and profiling:

[cluster_config]
  kube_api_server_audit_log_maxage=30
  kube_api_server_audit_log_maxbackup=10
  kube_api_server_audit_log_maxsize=10

For the configuration procedure, see MKE documentation: Configure an existing MKE cluster.

While using this procedure, replace the command to upload the newly edited MKE configuration file with the following one:

curl --silent --insecure -X PUT -H "X-UCP-Allow-Restricted-API: i-solemnly-swear-i-am-up-to-no-good" -H "accept: application/toml" -H "Authorization: Bearer $AUTHTOKEN" --upload-file 'mke-config.toml' https://$MKE_HOST/api/ucp/config-toml

In the command above:

  • The value for MKE_HOST has the <loadBalancerHost>:6443 format, where loadBalancerHost is the corresponding field in the cluster status.

  • The value for MKE_PASSWORD is taken from the ucp-admin-password-<clusterName> secret in the cluster namespace of the management cluster.

  • The value for MKE_USERNAME is always admin.

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.26.4. For artifacts of the Cluster releases introduced in 2.26.4, see patch Cluster releases 17.1.4 and 16.1.4.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries Updated

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20240502103738

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20240502103738

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.39.26.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.39.26.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.39.26.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.39.26.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.39.26.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.39.26.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.39.26.tgz

Docker images

ambasador Updated

mirantis.azurecr.io/core/external/nginx:1.39.26

baremetal-dnsmasq

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-26-alpine-20240408141922

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-2-26-alpine-20240415095355

bm-collective

mirantis.azurecr.io/bm/bm-collective:base-2-26-alpine-20240408142218

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.39.26

ironic Updated

mirantis.azurecr.io/openstack/ironic:yoga-jammy-20240510100941

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:yoga-jammy-20240510100941

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240117102150

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-2-26-alpine-20240408150853

kubernetes-entrypoint Updated

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-ba8ada4-20240405150338

mariadb

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20240311120505

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.24.0-47-gf77368e

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.13.12-ef4c9453-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.13.12-ef4c9453-amd64

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20240129163811

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.39.26.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.39.26.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.39.26.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.39.26.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.39.26.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.39.26.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.39.26.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.39.26.tgz

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.39.26.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.39.26.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.39.26.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.39.26.tgz

host-os-modules-controller

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.39.26.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.39.26.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.39.26.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.39.26.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.39.26.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.39.26.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.39.26.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.39.26.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.39.26.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.39.26.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.39.26.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.39.26.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.39.26.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.39.26.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.39.26.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.39.26.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.39.26.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.39.26.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.39.26.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.39.26.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.39.26.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.39.26.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.39.26.tgz

vsphere-cloud-controller-manager

https://binary.mirantis.com/core/helm/vsphere-cloud-controller-manager-1.39.26.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.39.26.tgz

vsphere-csi-plugin

https://binary.mirantis.com/core/helm/vsphere-csi-plugin-1.39.26.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.39.26.tgz

vsphere-vm-template-controller

https://binary.mirantis.com/core/helm/vsphere-vm-template-controller-1.39.26.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.39.26

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.39.26

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.39.26

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.39.26

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.39.26

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-6

cinder-csi-plugin Updated

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-16

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.39.26

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.39.26

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-5

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-5

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-5

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-5

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-4

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.39.26

frontend Updated

mirantis.azurecr.io/core/frontend:1.39.26

host-os-modules-controller Updated

mirantis.azurecr.io/core/host-os-modules-controller:1.39.26

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.39.26

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.39.26

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.39.26

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.39.26

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.39.26

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-5

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.39.26

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.24.0-47-gf77368e

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.24.0-47-gf77368e

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-7

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.39.26

openstack-cloud-controller-manager Updated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-16

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.39.26

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.39.26

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.39.26

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.39.26

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.39.26

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.39.26

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-9

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.39.26

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.39.26

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.39.26

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.39.26

vsphere-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/vsphere-cloud-controller-manager:v1.27.0-6

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.39.26

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.39.26

vsphere-csi-driver

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-driver:v3.0.2-1

vsphere-csi-syncer

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-syncer:v3.0.2-1

vsphere-vm-template-controller Updated

mirantis.azurecr.io/core/vsphere-vm-template-controller:1.39.26

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts

iam Updated

https://binary.mirantis.com/core/helm/iam-1.39.26.tgz

Docker images

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240501023013

kubernetes-entrypoint Updated

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-ba8ada4-20240405150338

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.6.17-focal-20240327104027

mcc-keycloak

mirantis.azurecr.io/iam/mcc-keycloak:23.0.6-20240216125244

See also

Patch releases

2.26.3

The Container Cloud patch release 2.26.3, which is based on the 2.26.0 major release, provides the following updates:

  • Support for the patch Cluster releases 16.1.3 and 17.1.3 that represents Mirantis OpenStack for Kubernetes (MOSK) patch release 24.1.3.

  • Support for MKE 3.7.7.

  • Bare metal: update of Ubuntu mirror from 20.04~20240324172903 to 20.04~20240411171541 along with update of minor kernel version from 5.15.0-101-generic to 5.15.0-102-generic.

  • Security fixes for CVEs in images.

  • Bug fixes.

This patch release also supports the latest major Cluster releases 17.1.0 and 16.1.0. It does not support greenfield deployments based on deprecated Cluster releases; use the latest available Cluster release instead.

For main deliverables of the parent Container Cloud release of 2.26.3, refer to 2.26.0.

Security notes

The table below includes the total numbers of addressed unique and common CVEs in images by product component since the Container Cloud 2.26.2 patch release. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component   CVE type   Critical   High   Total
Ceph                Unique     0          1      1
Ceph                Common     0          10     10
Core                Unique     0          4      4
Core                Common     0          105    105
StackLight          Unique     1          4      5
StackLight          Common     1          24     25

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 24.1.3: Security notes.

Addressed issues

The following issues have been addressed in the Container Cloud patch release 2.26.3 along with the patch Cluster releases 17.1.3 and 16.1.3.

  • [40811] [LCM] Fixed the issue with the DaemonSet Pod remaining on the deleted node in the Terminating state during machine deletion.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.26.3 including the Cluster releases 17.1.3 and 16.1.3.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[46245] Lack of access permissions for HOC and HOCM objects

Fixed in 2.28.0 (17.3.0 and 16.3.0)

When trying to list the HostOSConfigurationModules and HostOSConfiguration custom resources, the serviceuser account or a user with the global-admin or operator role obtains an access denied error. For example:

kubectl --kubeconfig ~/.kube/mgmt-config get hocm

Error from server (Forbidden): hostosconfigurationmodules.kaas.mirantis.com is forbidden:
User "2d74348b-5669-4c65-af31-6c05dbedac5f" cannot list resource "hostosconfigurationmodules"
in API group "kaas.mirantis.com" at the cluster scope: access denied

Workaround:

  1. Modify the global-admin role by adding a new entry with the following contents to the rules list:

    kubectl edit clusterroles kaas-global-admin
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurationmodules]
      verbs: ['*']
    
  2. For each Container Cloud project, modify the kaas-operator role by adding a new entry with the following contents to the rules list:

    kubectl -n <projectName> edit roles kaas-operator
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurations]
      verbs: ['*']
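
To verify that the added rules take effect, you can, for example, query the permissions on behalf of the affected user with kubectl auth can-i. The kubeconfig path and the project name below are placeholders; substitute the kubeconfig of the affected user and the corresponding Container Cloud project:

kubectl --kubeconfig <userKubeconfig> auth can-i list hostosconfigurationmodules.kaas.mirantis.com

kubectl --kubeconfig <userKubeconfig> -n <projectName> auth can-i list hostosconfigurations.kaas.mirantis.com

Both commands return yes once the modified roles are in effect.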
    
[42386] A load balancer service does not obtain the external IP address

Due to a MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first service that has been changed remains with no external IP address assigned. However, the second service, which was changed later, has the external IP assigned as expected.

To work around the issue, make a dummy change to the service object where external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[41305] DHCP responses are lost between dnsmasq and dhcp-relay pods

Fixed in 2.28.0 (17.3.0 and 16.3.0)

After node maintenance of a management cluster, the newly added nodes may fail to undergo provisioning successfully. The issue relates to new nodes that are in the same L2 domain as the management cluster.

The issue was observed on environments having management cluster nodes configured with a single L2 segment used for all network traffic (PXE and LCM/management networks).

To verify whether the cluster is affected:

Verify whether the dnsmasq and dhcp-relay pods run on the same node in the management cluster:

kubectl -n kaas get pods -o wide | grep -e "dhcp\|dnsmasq"

Example of system response:

dhcp-relay-7d85f75f76-5vdw2   2/2   Running   2 (36h ago)   36h   10.10.0.122     kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (36h ago)   36h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>

If this is the case, proceed to the workaround below.

Workaround:

  1. Log in to a node that contains kubeconfig of the affected management cluster.

  2. Make sure that at least two management cluster nodes are schedulable:

    kubectl get node
    

    Example of a positive system response:

    NAME                                             STATUS   ROLES    AGE   VERSION
    kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-ad5a6f51-b98f-43c3-91d5-55fed3d0ff21   Ready    master   37h   v1.27.10-mirantis-1
    
  3. Delete the dhcp-relay pod:

    kubectl -n kaas delete pod <dhcp-relay-xxxxx>
    
  4. Verify that the dnsmasq and dhcp-relay pods are scheduled into different nodes:

    kubectl -n kaas get pods -o wide | grep -e "dhcp\|dnsmasq"
    

    Example of a positive system response:

    dhcp-relay-7d85f75f76-rkv03   2/2   Running   0             49s   10.10.0.121     kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   <none>   <none>
    dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (37h ago)   37h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.
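
For example, assuming that <nodeName> is the Kubernetes name of the node that runs the ironic Pod:

kubectl cordon <nodeName>
kubectl drain <nodeName>

Depending on the workloads running on the node, kubectl drain may additionally require options such as --ignore-daemonsets.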


LCM
[41540] LCM Agent cannot grab storage information on a host

Fixed in 17.1.5 and 16.1.5

Due to issues with managing physical NVME devices, lcm-agent cannot grab storage information on a host. As a result, lcmmachine.status.hostinfo.hardware is empty and the following example error is present in logs:

{"level":"error","ts":"2024-05-02T12:26:10Z","logger":"agent", \
"msg":"get hardware details", \
"host":"kaas-node-548b2861-aed0-41c9-8ff2-10c5476b000b", \
"error":"new storage info: get disk info \"nvme0c0n1\": \
invoke command: exit status 1","errorVerbose":"exit status 1

As a workaround, on the affected node, create a symlink for any device indicated in lcm-agent logs. For example:

ln -sfn /dev/nvme0n1 /dev/nvme0c0n1

[39437] Failure to replace a master node on a Container Cloud cluster

Fixed in 2.29.0 (17.4.0 and 16.4.0)

During the replacement of a master node on a cluster of any type, the process may get stuck with the Kubelet's NodeReady condition is Unknown message in the machine status on the remaining master nodes.

As a workaround, log in on the affected node and run the following command:

docker restart ucp-kubelet

[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.
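
A minimal command sketch of the steps above, assuming that the affected replica is mariadb-server-0, that MariaDB runs in the kaas namespace, and that the container is named mariadb (adjust these values to your environment):

# Back up the data directory of the affected replica to the local machine
kubectl cp -c mariadb kaas/mariadb-server-0:/var/lib/mysql ./mariadb-server-0-mysql-backup

# Verify that the other replicas are up and ready
kubectl -n kaas get pods | grep mariadb-server

# Remove the galera.cache file on the affected replica
kubectl -n kaas exec mariadb-server-0 -c mariadb -- rm -f /var/lib/mysql/galera.cache

# Delete the affected Pod so that Kubernetes recreates it
kubectl -n kaas delete pod mariadb-server-0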

[30294] Replacement of a master node is stuck on the calico-node Pod start

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a master node on a cluster of any type, the calico-node Pod fails to start on a new node that has the same IP address as the node being replaced.

Workaround:

  1. Log in to any master node.

  2. From a CLI with an MKE client bundle, create a shell alias to start calicoctl using the mirantis/ucp-dsinfo image. Depending on where the etcd certificates reside on your cluster nodes (the ucp-kv-certs volume or the ucp-node-certs volume), use one of the following two alias definitions:

    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/key.pem \
    -e ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/ca.pem \
    -e ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v /var/lib/docker/volumes/ucp-kv-certs/_data:/var/lib/docker/volumes/ucp-kv-certs/_data:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl \
    "
    
    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \
    -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \
    -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v ucp-node-certs:/ucp-node-certs:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl --allow-version-mismatch \
    "
    

    In the above command, replace the following values with the corresponding settings of the affected cluster:

    • <etcdEndpoint> is the etcd endpoint defined in the Calico configuration file. For example, ETCD_ENDPOINTS=127.0.0.1:12378

    • <mkeVersion> is the MKE version installed on your cluster. For example, mirantis/ucp-dsinfo:3.5.7.

  3. Verify the node list on the cluster:

    kubectl get node
    
  4. Compare this list with the node list in Calico to identify the old node:

    calicoctl get node -o wide
    
  5. Remove the old node from Calico:

    calicoctl delete node kaas-node-<nodeID>
    
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>

Ceph
[41819] Graceful cluster reboot is blocked by the Ceph ClusterWorkloadLocks

Fixed in 2.27.0 (17.2.0 and 16.2.0)

During graceful reboot of a cluster with Ceph enabled, the reboot is blocked with the following message in the MiraCephMaintenance object status:

message: ClusterMaintenanceRequest found, Ceph Cluster is not ready to upgrade,
 delaying cluster maintenance

As a workaround, add the following snippet, which defines healthCheck under metadataServer in the cephFS section, to the spec section of <kcc-name>.yaml in the Ceph cluster:

cephClusterSpec:
  sharedFilesystem:
    cephFS:
    - name: cephfs-store
      metadataServer:
        activeCount: 1
        healthCheck:
          livenessProbe:
            probe:
              failureThreshold: 5
              initialDelaySeconds: 30
              periodSeconds: 30
              successThreshold: 1
              timeoutSeconds: 5
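
For example, assuming that the Ceph cluster is managed through the KaaSCephCluster object in the corresponding project namespace on the management cluster, you can open its specification for editing as follows:

kubectl -n <managedClusterProjectName> edit kaascephcluster <kcc-name>

Save the changes to apply the snippet.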
[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a managed cluster based on bare metal with Ceph enabled fails with the PersistentVolumeClaim of the prometheus-server StatefulSet getting stuck in the Pending state and the MountVolume.MountDevice failed for volume warning appearing in the StackLight event logs.

Workaround:

  1. Verify that the descriptions of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas (see the scaling sketch after this procedure).

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.
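
For reference, a minimal sketch of the scaling commands from steps 3 and 7, assuming that the affected workload is the prometheus-server StatefulSet in the stacklight namespace and that it originally ran with 1 replica (adjust the names and the replica count to your environment):

# Step 3: scale the affected workload down to 0 replicas
kubectl -n stacklight scale statefulset prometheus-server --replicas=0

# Step 7: scale it back up to the original number of replicas
kubectl -n stacklight scale statefulset prometheus-server --replicas=1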


StackLight
[42304] Failure of shard relocation in the OpenSearch cluster

Fixed in 17.2.0, 16.2.0, 17.1.6, 16.1.6

On large managed clusters, shard relocation may fail in the OpenSearch cluster, leaving the cluster in the yellow or red status. The characteristic symptom of the issue is that in the stacklight namespace, the statefulset.apps/opensearch-master containers are experiencing throttling with the KubeContainersCPUThrottlingHigh alert firing for the following set of labels:

{created_by_kind="StatefulSet",created_by_name="opensearch-master",namespace="stacklight"}

Caution

The throttling that OpenSearch is experiencing may be a temporary situation, which may be related, for example, to a peaky load and the ongoing shards initialization as part of disaster recovery or after node restart. In this case, Mirantis recommends waiting until initialization of all shards is finished. After that, verify the cluster state and whether throttling still exists. Apply the workaround below only if the throttling persists.

To verify that the initialization of shards is ongoing:

kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash

curl "http://localhost:9200/_cat/shards" | grep INITIALIZING

Example of system response:

.ds-system-000072    2 r INITIALIZING    10.232.182.135 opensearch-master-1
.ds-system-000073    1 r INITIALIZING    10.232.7.145   opensearch-master-2
.ds-system-000073    2 r INITIALIZING    10.232.182.135 opensearch-master-1
.ds-audit-000001     2 r INITIALIZING    10.232.7.145   opensearch-master-2

The system response above indicates that shards from the .ds-system-000072, .ds-system-000073, and .ds-audit-000001 indices are in the INITIALIZING state. In this case, Mirantis recommends waiting until this process is finished and only then considering a change of the limit.

You can additionally analyze the exact level of throttling and the current CPU usage on the Kubernetes Containers dashboard in Grafana.

Workaround:

  1. Verify the currently configured CPU requests and limits for the opensearch containers:

    kubectl -n stacklight get statefulset.apps/opensearch-master -o jsonpath="{.spec.template.spec.containers[?(@.name=='opensearch')].resources}"
    

    Example of system response:

    {"limits":{"cpu":"600m","memory":"8Gi"},"requests":{"cpu":"500m","memory":"6Gi"}}
    

    In the example above, the CPU request is 500m and the CPU limit is 600m.

  2. Increase the CPU limit to a reasonably high number.

    For example, the default CPU limit for the clusters with the clusterSize:large parameter set was increased from 8000m to 12000m for StackLight in Container Cloud 2.27.0 (Cluster releases 17.2.0 and 16.2.0).

    Note

    For details on the clusterSize parameter, see MOSK Operations Guide: StackLight configuration parameters - Cluster size.

    If the defaults are already overridden on the affected cluster using the resourcesPerClusterSize or resources parameters as described in MOSK Operations Guide: StackLight configuration parameters - Resource limits, then the exact recommended number depends on the currently set limit.

    Mirantis recommends increasing the limit by 50%. If it does not resolve the issue, another increase iteration will be required.

  3. When you select the required CPU limit, increase it as described in MOSK Operations Guide: StackLight configuration parameters - Resource limits.

    If the CPU limit for the opensearch component is already set, increase it in the Cluster object for the opensearch parameter. Otherwise, the default StackLight limit is used; in this case, set the CPU limit for the opensearch component using the resources parameter (see the configuration sketch at the end of this known issue).

  4. Wait until all opensearch-master pods are recreated with the new CPU limits and become running and ready.

    To verify the current CPU limit for every opensearch container in every opensearch-master pod separately:

    kubectl -n stacklight get pod/opensearch-master-<podSuffixNumber> -o jsonpath="{.spec.containers[?(@.name=='opensearch')].resources}"
    

    In the command above, replace <podSuffixNumber> with the name of the pod suffix. For example, pod/opensearch-master-0 or pod/opensearch-master-2.

    Example of system response:

    {"limits":{"cpu":"900m","memory":"8Gi"},"requests":{"cpu":"500m","memory":"6Gi"}}
    

    The waiting time may take up to 20 minutes depending on the cluster size.

If the issue is fixed, the KubeContainersCPUThrottlingHigh alert stops firing immediately, while OpenSearchClusterStatusWarning or OpenSearchClusterStatusCritical can still be firing for some time during shard relocation.

If the KubeContainersCPUThrottlingHigh alert is still firing, proceed with another iteration of the CPU limit increase.
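
As an illustration of step 3 of the workaround above, the following is a minimal sketch of a resources override for the opensearch component in the StackLight configuration. The exact location of the StackLight values in the Cluster object and the target CPU value are assumptions that depend on your environment:

resources:
  opensearch:
    limits:
      cpu: "12000m"  # example value; increase your current limit by about 50%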

[40020] Rollover policy update is not applied to the current index

Fixed in 17.2.0, 16.2.0, 17.1.6, 16.1.6

While updating rollover_policy for the current system* and audit* data streams, the update is not applied to indices.

One of the indicators that the cluster is most likely affected is the KubeJobFailed alert firing for the elasticsearch-curator job together with one or both of the following errors being present in the elasticsearch-curator pods that remain in the Error status:

2024-05-31 13:16:04,459 ERROR   Failed to complete action: delete_indices.  <class 'curator.exceptions.FailedExecution'>: Exception encountered.  Rerun with loglevel DEBUG and/or check Elasticsearch logs for more information. Exception: RequestError(400, 'illegal_argument_exception', 'index [.ds-audit-000001] is the write index for data stream [audit] and cannot be deleted')

or

2024-05-31 13:16:04,459 ERROR   Failed to complete action: delete_indices.  <class 'curator.exceptions.FailedExecution'>: Exception encountered.  Rerun with loglevel DEBUG and/or check Elasticsearch logs for more information. Exception: RequestError(400, 'illegal_argument_exception', 'index [.ds-system-000001] is the write index for data stream [system] and cannot be deleted')

Note

Instead of .ds-audit-000001 or .ds-system-000001 index names, similar names can be present with the same prefix but different suffix numbers.

If the above-mentioned alert and errors are present, immediate action is required because the corresponding index size has already exceeded the space allocated for the index.

To verify that the cluster is affected:

Caution

Verify and apply the workaround to both index patterns, system and audit, separately.

If one of the indices is affected, the second one is most likely affected as well, although in rare cases only one index may be affected.

  1. Log in to the opensearch-master-0 Pod:

    kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash
    
  2. Verify that the rollover policy is present:

    • system:

      curl localhost:9200/_plugins/_ism/policies/system_rollover_policy
      
    • audit:

      curl localhost:9200/_plugins/_ism/policies/audit_rollover_policy
      

    The cluster is affected if the rollover policy is missing. Otherwise, proceed to the following step.

  3. Verify the system response from the previous step. For example:

    {"_id":"system_rollover_policy","_version":7229,"_seq_no":42362,"_primary_term":28,"policy":{"policy_id":"system_rollover_policy","description":"system index rollover policy.","last_updated_time":1708505222430,"schema_version":19,"error_notification":null,"default_state":"rollover","states":[{"name":"rollover","actions":[{"retry":{"count":3,"backoff":"exponential","delay":"1m"},"rollover":{"min_size":"14746mb","copy_alias":false}}],"transitions":[]}],"ism_template":[{"index_patterns":["system*"],"priority":200,"last_updated_time":1708505222430}]}}
    

    Verify and capture the following items separately for every policy:

    • The _seq_no and _primary_term values

    • The rollover policy threshold, which is defined in policy.states[0].actions[0].rollover.min_size

  4. List indices:

    • system:

      curl localhost:9200/_cat/indices | grep system
      

      Example of system response:

      [...]
      green open .ds-system-000001   FjglnZlcTKKfKNbosaE9Aw 2 1 1998295  0   1gb 507.9mb
      
    • audit:

      curl localhost:9200/_cat/indices | grep audit
      

      Example of system response:

      [...]
      green open .ds-audit-000001   FjglnZlcTKKfKNbosaE9Aw 2 1 1998295  0   1gb 507.9mb
      
  5. Select the index with the highest number and verify the rollover policy attached to the index:

    • system:

      curl localhost:9200/_plugins/_ism/explain/.ds-system-000001
      
    • audit:

      curl localhost:9200/_plugins/_ism/explain/.ds-audit-000001
      
    • If the rollover policy is not attached, the cluster is affected.

    • If the rollover policy is attached but _seq_no and _primary_term numbers do not match the previously captured ones, the cluster is affected.

    • If the index size drastically exceeds the defined threshold of the rollover policy (which is the previously captured min_size), the cluster is most probably affected.

Workaround:

  1. Log in to the opensearch-master-0 Pod:

    kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash
    
  2. If the policy is attached to the index but has different _seq_no and _primary_term, remove the policy from the index:

    Note

    Use the index with the highest number in the name, which was captured during the verification procedure.

    • system:

      curl -XPOST localhost:9200/_plugins/_ism/remove/.ds-system-000001
      
    • audit:

      curl -XPOST localhost:9200/_plugins/_ism/remove/.ds-audit-000001
      
  3. Re-add the policy:

    • system:

      curl -XPOST -H "Content-type: application/json" localhost:9200/_plugins/_ism/add/system* -d'{"policy_id":"system_rollover_policy"}'
      
    • audit:

      curl -XPOST -H "Content-type: application/json" localhost:9200/_plugins/_ism/add/audit* -d'{"policy_id":"audit_rollover_policy"}'
      
  4. Repeat the last step of the cluster verification procedure above and make sure that the policy is attached to the index and has the same _seq_no and _primary_term.

    If the index size drastically exceeds the defined threshold of the rollover policy (which is the previously captured min_size), wait up to 15 minutes and verify that the additional index is created with the consecutive number in the index name. For example:

    • system: if you applied changes to .ds-system-000001, wait until .ds-system-000002 is created.

    • audit: if you applied changes to .ds-audit-000001, wait until .ds-audit-000002 is created.

    If such an index is not created, escalate the issue to Mirantis support.


Container Cloud web UI
[41806] Configuration of a management cluster fails without Keycloak settings

Fixed in 17.1.4 and 16.1.4

During configuration of the management cluster settings using the Configure cluster web UI menu, the Keycloak Truststore settings are incorrectly treated as mandatory although they are optional.

As a workaround, update the management cluster using the API or CLI.
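
For example, a minimal sketch of updating the management cluster settings through the CLI instead of the web UI. The default namespace and the exact resource reference for the management cluster object are assumptions and may differ in your environment:

kubectl --kubeconfig <mgmtKubeconfig> -n default edit cluster <managementClusterName>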

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.26.3. For artifacts of the Cluster releases introduced in 2.26.3, see patch Cluster releases 17.1.3 and 16.1.3.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries Updated

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20240411174919

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20240411174919

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.39.23.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.39.23.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.39.23.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.39.23.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.39.23.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.39.23.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.39.23.tgz

Docker images

ambasador Updated

mirantis.azurecr.io/core/external/nginx:1.39.23

baremetal-dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-26-alpine-20240408141922

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-2-26-alpine-20240408141703

bm-collective Updated

mirantis.azurecr.io/bm/bm-collective:base-2-26-alpine-20240408142218

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.39.23

ironic

mirantis.azurecr.io/openstack/ironic:yoga-jammy-20240226060024

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:yoga-jammy-20240226060024

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240117102150

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-2-26-alpine-20240408150853

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-55b02f7-20231019172556

mariadb

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20240311120505

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.24.0-47-gf77368e

metallb-controller Updated

mirantis.azurecr.io/bm/metallb/controller:v0.13.12-ef4c9453-amd64

metallb-speaker Updated

mirantis.azurecr.io/bm/metallb/speaker:v0.13.12-ef4c9453-amd64

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20240129163811

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.39.23.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.39.23.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.39.23.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.39.23.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.39.23.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.39.23.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.39.23.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.39.23.tgz

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.39.23.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.39.23.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.39.23.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.39.23.tgz

host-os-modules-controller

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.39.23.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.39.23.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.39.23.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.39.23.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.39.23.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.39.23.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.39.23.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.39.23.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.39.23.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.39.23.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.39.23.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.39.23.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.39.23.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.39.23.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.39.23.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.39.23.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.39.23.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.39.23.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.39.23.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.39.23.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.39.23.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.39.23.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.39.23.tgz

vsphere-cloud-controller-manager

https://binary.mirantis.com/core/helm/vsphere-cloud-controller-manager-1.39.23.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.39.23.tgz

vsphere-csi-plugin

https://binary.mirantis.com/core/helm/vsphere-csi-plugin-1.39.23.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.39.23.tgz

vsphere-vm-template-controller

https://binary.mirantis.com/core/helm/vsphere-vm-template-controller-1.39.23.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.39.23

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.39.23

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.39.23

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.39.23

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.39.23

cert-manager-controller Updated

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-6

cinder-csi-plugin Updated

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-14

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.39.23

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.39.23

csi-attacher Updated

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-5

csi-node-driver-registrar Updated

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-5

csi-provisioner Updated

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-5

csi-resizer Updated

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-5

csi-snapshotter Updated

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-4

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.39.23

frontend Updated

mirantis.azurecr.io/core/frontend:1.39.23

host-os-modules-controller Updated

mirantis.azurecr.io/core/host-os-modules-controller:1.39.23

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.39.23

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.39.23

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.39.23

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.39.23

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.39.23

livenessprobe Updated

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-5

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.39.23

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.24.0-47-gf77368e

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.24.0-47-gf77368e

metrics-server Updated

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-7

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.39.23

openstack-cloud-controller-manager Updated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-14

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.39.23

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.39.23

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.39.23

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.39.23

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.39.23

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.39.23

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-9

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.39.23

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.39.23

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.39.23

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.39.23

vsphere-cloud-controller-manager Updated

mirantis.azurecr.io/lcm/kubernetes/vsphere-cloud-controller-manager:v1.27.0-6

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.39.23

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.39.23

vsphere-csi-driver

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-driver:v3.0.2-1

vsphere-csi-syncer

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-syncer:v3.0.2-1

vsphere-vm-template-controller Updated

mirantis.azurecr.io/core/vsphere-vm-template-controller:1.39.23

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts

iam Updated

https://binary.mirantis.com/core/helm/iam-1.39.23.tgz

Docker images

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.22-20240221023016

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-55b02f7-20231019172556

mariadb

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20240311120505

mcc-keycloak

mirantis.azurecr.io/iam/mcc-keycloak:23.0.6-20240216125244

See also

Patch releases

2.26.2

The Container Cloud patch release 2.26.2, which is based on the 2.26.0 major release, provides the following updates:

  • Support for the patch Cluster releases 16.1.2 and 17.1.2 that represents Mirantis OpenStack for Kubernetes (MOSK) patch release 24.1.2.

  • Support for MKE 3.7.6.

  • Support for docker-ee-cli 23.0.10 in MCR 23.0.9 to fix several CVEs.

  • Bare metal: update of Ubuntu mirror from 20.04~20240302175618 to 20.04~20240324172903 along with update of minor kernel version from 5.15.0-97-generic to 5.15.0-101-generic.

  • Security fixes for CVEs in images.

This patch release also supports the latest major Cluster releases 17.1.0 and 16.1.0. It does not support greenfield deployments based on deprecated Cluster releases; use the latest available Cluster release instead.

For main deliverables of the parent Container Cloud release of 2.26.2, refer to 2.26.0.

Security notes

The table below includes the total numbers of addressed unique and common CVEs in images by product component since the Container Cloud 2.26.1 patch release. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component   CVE type   Critical   High   Total
Ceph                Unique     0          3      3
Ceph                Common     0          12     12
KaaS core           Unique     1          6      7
KaaS core           Common     1          11     12
StackLight          Unique     0          1      1
StackLight          Common     0          10     10

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 24.1.2: Security notes.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.26.2 including the Cluster releases 17.1.2 and 16.1.2.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[46245] Lack of access permissions for HOC and HOCM objects

Fixed in 2.28.0 (17.3.0 and 16.3.0)

When trying to list the HostOSConfigurationModules and HostOSConfiguration custom resources, the serviceuser account or a user with the global-admin or operator role obtains an access denied error. For example:

kubectl --kubeconfig ~/.kube/mgmt-config get hocm

Error from server (Forbidden): hostosconfigurationmodules.kaas.mirantis.com is forbidden:
User "2d74348b-5669-4c65-af31-6c05dbedac5f" cannot list resource "hostosconfigurationmodules"
in API group "kaas.mirantis.com" at the cluster scope: access denied

Workaround:

  1. Modify the global-admin role by adding a new entry with the following contents to the rules list:

    kubectl edit clusterroles kaas-global-admin
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurationmodules]
      verbs: ['*']
    
  2. For each Container Cloud project, modify the kaas-operator role by adding a new entry with the following contents to the rules list:

    kubectl -n <projectName> edit roles kaas-operator
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurations]
      verbs: ['*']
    
[42386] A load balancer service does not obtain the external IP address

Due to a MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first service that has been changed remains with no external IP address assigned. However, the second service, which was changed later, has the external IP assigned as expected.

To work around the issue, make a dummy change to the service object where external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[41305] DHCP responses are lost between dnsmasq and dhcp-relay pods

Fixed in 2.28.0 (17.3.0 and 16.3.0)

After node maintenance of a management cluster, the newly added nodes may fail to undergo provisioning successfully. The issue relates to new nodes that are in the same L2 domain as the management cluster.

The issue was observed on environments having management cluster nodes configured with a single L2 segment used for all network traffic (PXE and LCM/management networks).

To verify whether the cluster is affected:

Verify whether the dnsmasq and dhcp-relay pods run on the same node in the management cluster:

kubectl -n kaas get pods -o wide | grep -e "dhcp\|dnsmasq"

Example of system response:

dhcp-relay-7d85f75f76-5vdw2   2/2   Running   2 (36h ago)   36h   10.10.0.122     kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (36h ago)   36h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>

If this is the case, proceed to the workaround below.

Workaround:

  1. Log in to a node that contains kubeconfig of the affected management cluster.

  2. Make sure that at least two management cluster nodes are schedulable:

    kubectl get node
    

    Example of a positive system response:

    NAME                                             STATUS   ROLES    AGE   VERSION
    kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-ad5a6f51-b98f-43c3-91d5-55fed3d0ff21   Ready    master   37h   v1.27.10-mirantis-1
    
  3. Delete the dhcp-relay pod:

    kubectl -n kaas delete pod <dhcp-relay-xxxxx>
    
  4. Verify that the dnsmasq and dhcp-relay pods are scheduled into different nodes:

    kubectl -n kaas get pods -o wide | grep -e "dhcp\|dnsmasq"
    

    Example of a positive system response:

    dhcp-relay-7d85f75f76-rkv03   2/2   Running   0             49s   10.10.0.121     kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   <none>   <none>
    dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (37h ago)   37h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.


LCM
[41540] LCM Agent cannot grab storage information on a host

Fixed in 17.1.5 and 16.1.5

Due to issues with managing physical NVME devices, lcm-agent cannot grab storage information on a host. As a result, lcmmachine.status.hostinfo.hardware is empty and the following example error is present in logs:

{"level":"error","ts":"2024-05-02T12:26:10Z","logger":"agent", \
"msg":"get hardware details", \
"host":"kaas-node-548b2861-aed0-41c9-8ff2-10c5476b000b", \
"error":"new storage info: get disk info \"nvme0c0n1\": \
invoke command: exit status 1","errorVerbose":"exit status 1

As a workaround, on the affected node, create a symlink for any device indicated in lcm-agent logs. For example:

ln -sfn /dev/nvme0n1 /dev/nvme0c0n1

[40811] Pod is stuck in the Terminating state on the deleted node

Fixed in 17.1.3 and 16.1.3

During deletion of a machine, the related DaemonSet Pod can remain on the deleted node in the Terminating state. As a workaround, manually delete the Pod:

kubectl delete pod -n <podNamespace> <podName>

[39437] Failure to replace a master node on a Container Cloud cluster

Fixed in 2.29.0 (17.4.0 and 16.4.0)

During the replacement of a master node on a cluster of any type, the process may get stuck with the Kubelet's NodeReady condition is Unknown message in the machine status on the remaining master nodes.

As a workaround, log in on the affected node and run the following command:

docker restart ucp-kubelet

[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.

[30294] Replacement of a master node is stuck on the calico-node Pod start

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a master node on a cluster of any type, the calico-node Pod fails to start on a new node that has the same IP address as the node being replaced.

Workaround:

  1. Log in to any master node.

  2. From a CLI with an MKE client bundle, create a shell alias to start calicoctl using the mirantis/ucp-dsinfo image. Depending on where the etcd certificates reside on your cluster nodes (the ucp-kv-certs volume or the ucp-node-certs volume), use one of the following two alias definitions:

    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/key.pem \
    -e ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/ca.pem \
    -e ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v /var/lib/docker/volumes/ucp-kv-certs/_data:/var/lib/docker/volumes/ucp-kv-certs/_data:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl \
    "
    
    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \
    -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \
    -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v ucp-node-certs:/ucp-node-certs:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl --allow-version-mismatch \
    "
    

    In the above command, replace the following values with the corresponding settings of the affected cluster:

    • <etcdEndpoint> is the etcd endpoint defined in the Calico configuration file. For example, ETCD_ENDPOINTS=127.0.0.1:12378

    • <mkeVersion> is the MKE version installed on your cluster. For example, mirantis/ucp-dsinfo:3.5.7.

  3. Verify the node list on the cluster:

    kubectl get node
    
  4. Compare this list with the node list in Calico to identify the old node:

    calicoctl get node -o wide
    
  5. Remove the old node from Calico:

    calicoctl delete node kaas-node-<nodeID>
    
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>

Ceph
[41819] Graceful cluster reboot is blocked by the Ceph ClusterWorkloadLocks

Fixed in 2.27.0 (17.2.0 and 16.2.0)

During graceful reboot of a cluster with Ceph enabled, the reboot is blocked with the following message in the MiraCephMaintenance object status:

message: ClusterMaintenanceRequest found, Ceph Cluster is not ready to upgrade,
 delaying cluster maintenance

As a workaround, add the following snippet, which defines healthCheck under metadataServer in the cephFS section, to the spec section of <kcc-name>.yaml in the Ceph cluster:

cephClusterSpec:
  sharedFilesystem:
    cephFS:
    - name: cephfs-store
      metadataServer:
        activeCount: 1
        healthCheck:
          livenessProbe:
            probe:
              failureThreshold: 5
              initialDelaySeconds: 30
              periodSeconds: 30
              successThreshold: 1
              timeoutSeconds: 5

[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a managed cluster based on bare metal with Ceph enabled fails with the PersistentVolumeClaim of the prometheus-server StatefulSet getting stuck in the Pending state and the MountVolume.MountDevice failed for volume warning appearing in the StackLight event logs.

Workaround:

  1. Verify that the descriptions of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.


StackLight
[42304] Failure of shard relocation in the OpenSearch cluster

Fixed in 17.2.0, 16.2.0, 17.1.6, 16.1.6

On large managed clusters, shard relocation may fail in the OpenSearch cluster, leaving the cluster in the yellow or red status. The characteristic symptom of the issue is that in the stacklight namespace, the statefulset.apps/opensearch-master containers are experiencing throttling with the KubeContainersCPUThrottlingHigh alert firing for the following set of labels:

{created_by_kind="StatefulSet",created_by_name="opensearch-master",namespace="stacklight"}

Caution

The throttling that OpenSearch is experiencing may be a temporary situation, which may be related, for example, to a peaky load and the ongoing shards initialization as part of disaster recovery or after node restart. In this case, Mirantis recommends waiting until initialization of all shards is finished. After that, verify the cluster state and whether throttling still exists. Apply the workaround below only if the throttling persists.

To verify that the initialization of shards is ongoing:

kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash

curl "http://localhost:9200/_cat/shards" | grep INITIALIZING

Example of system response:

.ds-system-000072    2 r INITIALIZING    10.232.182.135 opensearch-master-1
.ds-system-000073    1 r INITIALIZING    10.232.7.145   opensearch-master-2
.ds-system-000073    2 r INITIALIZING    10.232.182.135 opensearch-master-1
.ds-audit-000001     2 r INITIALIZING    10.232.7.145   opensearch-master-2

The system response above indicates that shards from the .ds-system-000072, .ds-system-000073, and .ds-audit-000001 indices are in the INITIALIZING state. In this case, Mirantis recommends waiting until this process is finished, and only then consider changing the limit.

You can additionally analyze the exact level of throttling and the current CPU usage on the Kubernetes Containers dashboard in Grafana.

Workaround:

  1. Verify the currently configured CPU requests and limits for the opensearch containers:

    kubectl -n stacklight get statefulset.apps/opensearch-master -o jsonpath="{.spec.template.spec.containers[?(@.name=='opensearch')].resources}"
    

    Example of system response:

    {"limits":{"cpu":"600m","memory":"8Gi"},"requests":{"cpu":"500m","memory":"6Gi"}}
    

    In the example above, the CPU request is 500m and the CPU limit is 600m.

  2. Increase the CPU limit to a reasonably high number.

    For example, the default CPU limit for the clusters with the clusterSize:large parameter set was increased from 8000m to 12000m for StackLight in Container Cloud 2.27.0 (Cluster releases 17.2.0 and 16.2.0).

    Note

    For details on the clusterSize parameter, see MOSK Operations Guide: StackLight configuration parameters - Cluster size.

    If the defaults are already overridden on the affected cluster using the resourcesPerClusterSize or resources parameters as described in MOSK Operations Guide: StackLight configuration parameters - Resource limits, then the exact recommended number depends on the currently set limit.

    Mirantis recommends increasing the limit by 50%. If it does not resolve the issue, another increase iteration will be required.

  3. Once you have selected the required CPU limit, increase it as described in MOSK Operations Guide: StackLight configuration parameters - Resource limits.

    If the CPU limit for the opensearch component is already set, increase it in the Cluster object for the opensearch parameter. Otherwise, the default StackLight limit is used; in this case, set the CPU limit for the opensearch component using the resources parameter. See the example after this procedure.

  4. Wait until all opensearch-master pods are recreated with the new CPU limits and become running and ready.

    To verify the current CPU limit for every opensearch container in every opensearch-master pod separately:

    kubectl -n stacklight get pod/opensearch-master-<podSuffixNumber> -o jsonpath="{.spec.containers[?(@.name=='opensearch')].resources}"
    

    In the command above, replace <podSuffixNumber> with the pod suffix number. For example, pod/opensearch-master-0 or pod/opensearch-master-2.

    Example of system response:

    {"limits":{"cpu":"900m","memory":"8Gi"},"requests":{"cpu":"500m","memory":"6Gi"}}
    

    The waiting time may take up to 20 minutes depending on the cluster size.
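
The following minimal sketch shows where the opensearch CPU limit from step 3 can be defined in the Cluster object. The exact placement of the StackLight values is an assumption for illustration; verify the parameter structure against MOSK Operations Guide: StackLight configuration parameters - Resource limits before applying it:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: stacklight
        values:
          resources:
            opensearch:
              limits:
                cpu: "12000m"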

If the issue is fixed, the KubeContainersCPUThrottlingHigh alert stops firing immediately, while OpenSearchClusterStatusWarning or OpenSearchClusterStatusCritical can still be firing for some time during shard relocation.

If the KubeContainersCPUThrottlingHigh alert is still firing, proceed with another iteration of the CPU limit increase.

[40020] Rollover policy update is not applied to the current index

Fixed in 17.2.0, 16.2.0, 17.1.6, 16.1.6

When updating the rollover_policy for the current system* and audit* data streams, the update is not applied to the current indices.

One of the indicators that the cluster is most likely affected is the KubeJobFailed alert firing for the elasticsearch-curator job and one or both of the following errors being present in the elasticsearch-curator pods that remain in the Error status:

2024-05-31 13:16:04,459 ERROR   Failed to complete action: delete_indices.  <class 'curator.exceptions.FailedExecution'>: Exception encountered.  Rerun with loglevel DEBUG and/or check Elasticsearch logs for more information. Exception: RequestError(400, 'illegal_argument_exception', 'index [.ds-audit-000001] is the write index for data stream [audit] and cannot be deleted')

or

2024-05-31 13:16:04,459 ERROR   Failed to complete action: delete_indices.  <class 'curator.exceptions.FailedExecution'>: Exception encountered.  Rerun with loglevel DEBUG and/or check Elasticsearch logs for more information. Exception: RequestError(400, 'illegal_argument_exception', 'index [.ds-system-000001] is the write index for data stream [system] and cannot be deleted')

Note

Instead of .ds-audit-000001 or .ds-system-000001 index names, similar names can be present with the same prefix but different suffix numbers.

If the above-mentioned alert and errors are present, immediate action is required because they indicate that the corresponding index size has already exceeded the space allocated for the index.

To verify that the cluster is affected:

Caution

Verify and apply the workaround to both index patterns, system and audit, separately.

If one of the indices is affected, the second one is most likely affected as well, although in rare cases only one index may be affected.

  1. Log in to the opensearch-master-0 Pod:

    kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash
    
  2. Verify that the rollover policy is present:

    • system:

      curl localhost:9200/_plugins/_ism/policies/system_rollover_policy
      
    • audit:

      curl localhost:9200/_plugins/_ism/policies/audit_rollover_policy
      

    The cluster is affected if the rollover policy is missing. Otherwise, proceed to the following step.

  3. Verify the system response from the previous step. For example:

    {"_id":"system_rollover_policy","_version":7229,"_seq_no":42362,"_primary_term":28,"policy":{"policy_id":"system_rollover_policy","description":"system index rollover policy.","last_updated_time":1708505222430,"schema_version":19,"error_notification":null,"default_state":"rollover","states":[{"name":"rollover","actions":[{"retry":{"count":3,"backoff":"exponential","delay":"1m"},"rollover":{"min_size":"14746mb","copy_alias":false}}],"transitions":[]}],"ism_template":[{"index_patterns":["system*"],"priority":200,"last_updated_time":1708505222430}]}}
    

    Verify and capture the following items separately for every policy (see the extraction example after this procedure):

    • The _seq_no and _primary_term values

    • The rollover policy threshold, which is defined in policy.states[0].actions[0].rollover.min_size

  4. List indices:

    • system:

      curl localhost:9200/_cat/indices | grep system
      

      Example of system response:

      [...]
      green open .ds-system-000001   FjglnZlcTKKfKNbosaE9Aw 2 1 1998295  0   1gb 507.9mb
      
    • audit:

      curl localhost:9200/_cat/indices | grep audit
      

      Example of system response:

      [...]
      green open .ds-audit-000001   FjglnZlcTKKfKNbosaE9Aw 2 1 1998295  0   1gb 507.9mb
      
  5. Select the index with the highest number and verify the rollover policy attached to the index:

    • system:

      curl localhost:9200/_plugins/_ism/explain/.ds-system-000001
      
    • audit:

      curl localhost:9200/_plugins/_ism/explain/.ds-audit-000001
      
    • If the rollover policy is not attached, the cluster is affected.

    • If the rollover policy is attached but _seq_no and _primary_term numbers do not match the previously captured ones, the cluster is affected.

    • If the index size drastically exceeds the defined threshold of the rollover policy (which is the previously captured min_size), the cluster is most probably affected.
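
Optionally, to capture the items from step 3 in one command, you can filter the policy response as follows. This is a convenience sketch that assumes the jq utility is available inside the opensearch container; otherwise, inspect the JSON manually:

curl -s localhost:9200/_plugins/_ism/policies/system_rollover_policy | \
  jq '{seq_no: ._seq_no, primary_term: ._primary_term, min_size: .policy.states[0].actions[0].rollover.min_size}'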

Workaround:

  1. Log in to the opensearch-master-0 Pod:

    kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash
    
  2. If the policy is attached to the index but has different _seq_no and _primary_term, remove the policy from the index:

    Note

    Use the index with the highest number in the name, which was captured during the verification procedure.

    • system:

      curl -XPOST localhost:9200/_plugins/_ism/remove/.ds-system-000001
      
    • audit:

      curl -XPOST localhost:9200/_plugins/_ism/remove/.ds-audit-000001
      
  3. Re-add the policy:

    • system:

      curl -XPOST -H "Content-type: application/json" localhost:9200/_plugins/_ism/add/system* -d'{"policy_id":"system_rollover_policy"}'
      
    • audit:

      curl -XPOST -H "Content-type: application/json" localhost:9200/_plugins/_ism/add/audit* -d'{"policy_id":"audit_rollover_policy"}'
      
  4. Repeat the last step of the cluster verification procedure above and make sure that the policy is attached to the index and has the same _seq_no and _primary_term.

    If the index size drastically exceeds the defined threshold of the rollover policy (which is the previously captured min_size), wait up to 15 minutes and verify that the additional index is created with the consecutive number in the index name. For example:

    • system: if you applied changes to .ds-system-000001, wait until .ds-system-000002 is created.

    • audit: if you applied changes to .ds-audit-000001, wait until .ds-audit-000002 is created.

    If such an index is not created, escalate the issue to Mirantis support.


Container Cloud web UI
[41806] Configuration of a management cluster fails without Keycloak settings

Fixed in 17.1.4 and 16.1.4

During configuration of management cluster settings using the Configure cluster web UI menu, updating the Keycloak Truststore settings is mandatory despite being optional.

As a workaround, update the management cluster using the API or CLI.

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.26.2. For artifacts of the Cluster releases introduced in 2.26.2, see patch Cluster releases 17.1.2 and 16.1.2.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries Updated

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20240324195604

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20240324195604

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.39.19.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.39.19.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.39.19.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.39.19.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.39.19.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.39.19.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.39.19.tgz

Docker images

ambasador Updated

mirantis.azurecr.io/core/external/nginx:1.39.19

baremetal-dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-26-alpine-20240325100252

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-2-26-alpine-20240325093002

bm-collective

mirantis.azurecr.io/bm/bm-collective:base-2-26-alpine-20240129155244

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.39.19

ironic

mirantis.azurecr.io/openstack/ironic:yoga-jammy-20240226060024

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:yoga-jammy-20240226060024

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240117102150

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-2-26-alpine-20240129213142

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-55b02f7-20231019172556

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20240311120505

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.24.0-47-gf77368e

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.13.12-31212f9e-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.13.12-31212f9e-amd64

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20240129163811

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.39.19.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.39.19.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.39.19.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.39.19.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.39.19.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.39.19.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.39.19.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.39.19.tgz

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.39.19.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.39.19.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.39.19.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.39.19.tgz

host-os-modules-controller

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.39.19.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.39.19.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.39.19.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.39.19.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.39.19.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.39.19.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.39.19.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.39.19.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.39.19.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.39.19.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.39.19.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.39.19.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.39.19.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.39.19.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.39.19.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.39.19.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.39.19.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.39.19.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.39.19.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.39.19.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.39.19.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.39.19.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.39.19.tgz

vsphere-cloud-controller-manager

https://binary.mirantis.com/core/helm/vsphere-cloud-controller-manager-1.39.19.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.39.19.tgz

vsphere-csi-plugin

https://binary.mirantis.com/core/helm/vsphere-csi-plugin-1.39.19.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.39.19.tgz

vsphere-vm-template-controller

https://binary.mirantis.com/core/helm/vsphere-vm-template-controller-1.39.19.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.39.19

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.39.19

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.39.19

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.39.19

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.39.19

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-5

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-13

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.39.19

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.39.19

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-4

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-4

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-4

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-4

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-3

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.39.19

frontend Updated

mirantis.azurecr.io/core/frontend:1.39.19

host-os-modules-controller Updated

mirantis.azurecr.io/core/host-os-modules-controller:1.39.19

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.39.19

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.39.19

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.39.19

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.39.19

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.39.19

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-4

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.39.19

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.24.0-47-gf77368e

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.24.0-47-gf77368e

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-6

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.39.19

openstack-cloud-controller-manager Updated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-13

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.39.19

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.39.19

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.39.19

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.39.19

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.39.19

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.39.19

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-9

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.39.19

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.39.19

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.39.19

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.39.19

vsphere-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/vsphere-cloud-controller-manager:v1.27.0-5

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.39.19

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.39.19

vsphere-csi-driver

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-driver:v3.0.2-1

vsphere-csi-syncer

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-syncer:v3.0.2-1

vsphere-vm-template-controller Updated

mirantis.azurecr.io/core/vsphere-vm-template-controller:1.39.19

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/core/helm/iam-1.39.19.tgz

Docker images

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240221023016

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-55b02f7-20231019172556

mariadb

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20231127070342

mcc-keycloak Updated

mirantis.azurecr.io/iam/mcc-keycloak:23.0.6-20240216125244

See also

Patch releases

2.26.1

The Container Cloud patch release 2.26.1, which is based on the 2.26.0 major release, provides the following updates:

  • Support for the patch Cluster releases 16.1.1 and 17.1.1 that represents Mirantis OpenStack for Kubernetes (MOSK) patch release 24.1.1.

  • Delivery mechanism for CVE fixes on Ubuntu in bare metal clusters that includes update of Ubuntu kernel minor version. For details, see Enhancements.

  • Security fixes for CVEs in images.

This patch release also supports the latest major Cluster releases 17.1.0 and 16.1.0. It does not support greenfield deployments based on deprecated Cluster releases; use the latest available Cluster release instead.

For main deliverables of the parent Container Cloud release of 2.26.1, refer to 2.26.0.

Enhancements

This section outlines new features and enhancements introduced in the Container Cloud patch release 2.26.1 along with Cluster releases 17.1.1 and 16.1.1.

Delivery mechanism for CVE fixes on Ubuntu in bare metal clusters

Introduced the ability to update Ubuntu packages, including the kernel minor version, when available in a Cluster release, for both management and managed bare metal clusters to address CVE issues on the host operating system.

  • On management clusters, the update of the Ubuntu mirror along with the minor kernel version update occurs automatically with cordon-drain and reboot of machines.

  • On managed clusters, the update of the Ubuntu mirror along with the minor kernel version update applies during a manual cluster update without automatic cordon-drain and reboot of machines. After a managed cluster update, all cluster machines have the reboot is required notification. You can manually handle the reboot of machines during a convenient maintenance window using GracefulRebootRequest, as sketched below.
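
A minimal sketch of such a request follows. The field layout is an assumption based on the GracefulRebootRequest resource description in the Operations Guide; verify the exact API version and fields there before use:

apiVersion: kaas.mirantis.com/v1alpha1
kind: GracefulRebootRequest
metadata:
  name: <clusterName>        # must match the name of the cluster to reboot
  namespace: <projectName>   # project (namespace) of the cluster
spec:
  machines:                  # machines to reboot; an empty list reboots all machines
  - <machineName>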

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.26.1. For artifacts of the Cluster releases introduced in 2.26.1, see patch Cluster releases 17.1.1 and 16.1.1.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries Updated

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20240302181430

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20240302181430

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-155-1882779.tgz

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.39.15.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.39.15.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.39.15.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.39.15.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.39.15.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.39.15.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.39.15.tgz

Docker images

ambasador Updated

mirantis.azurecr.io/core/external/nginx:1.39.15

baremetal-dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-26-alpine-20240226130438

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-2-26-alpine-20240226130310

bm-collective

mirantis.azurecr.io/bm/bm-collective:base-2-26-alpine-20240129155244

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.39.15

ironic Updated

mirantis.azurecr.io/openstack/ironic:yoga-jammy-20240226060024

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:yoga-jammy-20240226060024

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240117102150

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-2-26-alpine-20240129213142

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-55b02f7-20231019172556

mariadb

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20231127070342

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.24.0-47-gf77368e

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.13.12-31212f9e-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.13.12-31212f9e-amd64

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20240129163811

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.39.15.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.39.15.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.39.15.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.39.15.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.39.15.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.39.15.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.39.15.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.39.15.tgz

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.39.15.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.39.15.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.39.15.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.39.15.tgz

host-os-modules-controller

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.39.15.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.39.15.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.39.15.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.39.15.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.39.15.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.39.15.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.39.15.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.39.15.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.39.15.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.39.15.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.39.15.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.39.15.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.39.15.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.39.15.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.39.15.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.39.15.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.39.15.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.39.15.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.39.15.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.39.15.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.39.15.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.39.15.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.39.15.tgz

vsphere-cloud-controller-manager

https://binary.mirantis.com/core/helm/vsphere-cloud-controller-manager-1.39.15.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.39.15.tgz

vsphere-csi-plugin

https://binary.mirantis.com/core/helm/vsphere-csi-plugin-1.39.15.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.39.15.tgz

vsphere-vm-template-controller

https://binary.mirantis.com/core/helm/vsphere-vm-template-controller-1.39.15.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.39.15

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.39.15

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.39.15

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.39.15

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.39.15

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-5

cinder-csi-plugin Updated

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-13

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.39.15

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.39.15

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-4

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-4

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-4

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-4

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-3

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.39.15

frontend Updated

mirantis.azurecr.io/core/frontend:1.39.15

host-os-modules-controller Updated

mirantis.azurecr.io/core/host-os-modules-controller:1.39.15

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.39.15

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.39.15

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.39.15

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.39.15

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.39.15

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-4

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.39.15

mcc-haproxy Updated

mirantis.azurecr.io/lcm/mcc-haproxy:v0.24.0-47-gf77368e

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.24.0-47-gf77368e

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-6

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.39.15

openstack-cloud-controller-manager Updated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-13

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.39.15

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.39.15

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.39.15

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.39.15

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.39.15

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.39.15

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-9

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.39.15

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.39.15

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.39.15

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.39.15

vsphere-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/vsphere-cloud-controller-manager:v1.27.0-5

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.39.15

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.39.15

vsphere-csi-driver

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-driver:v3.0.2-1

vsphere-csi-syncer

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-syncer:v3.0.2-1

vsphere-vm-template-controller Updated

mirantis.azurecr.io/core/vsphere-vm-template-controller:1.39.15

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts

iam

https://binary.mirantis.com/core/helm/iam-1.39.15.tgz

Docker images

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.22-20240105023016

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-55b02f7-20231019172556

mariadb

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20231127070342

mcc-keycloak

mirantis.azurecr.io/iam/mcc-keycloak:23.0.3-1

Security notes

The table below includes the total numbers of addressed unique and common CVEs in images by product component since the Container Cloud 2.26.0 major release. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component    CVE type    Critical    High    Total

Ceph                 Unique      0           1       1
                     Common      0           3       3
Kaas core            Unique      0           6       6
                     Common      0           27      27
StackLight           Unique      0           15      15
                     Common      0           51      51

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 24.1.1: Security notes.

Addressed issues

The following issues have been addressed in the Container Cloud patch release 2.26.1 along with the patch Cluster releases 17.1.1 and 16.1.1.

  • [39330] [StackLight] Fixed the issue with the OpenSearch cluster being stuck due to initializing replica shards.

  • [39220] [StackLight] Fixed the issue with Patroni failure due to no limit configuration for the max_timelines_history parameter.

  • [39080] [StackLight] Fixed the issue with the OpenSearchClusterStatusWarning alert firing during cluster upgrade if StackLight is deployed in the HA mode.

  • [38970] [StackLight] Fixed the issue with the Logs dashboard in the OpenSearch Dashboards web UI not working for the system index.

  • [38937] [StackLight] Fixed the issue with the View logs in OpenSearch Dashboards link not working in the Grafana web UI.

  • [40747] [vSphere] Fixed the issue with the unsupported Cluster release being available for greenfield vSphere-based managed cluster deployments in the drop-down menu of the cluster creation window in the Container Cloud web UI.

  • [40036] [LCM] Fixed the issue causing nodes to remain in the Kubernetes cluster when the corresponding Machine object is disabled during cluster update.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.26.1 including the Cluster releases 17.1.1 and 16.1.1.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[46245] Lack of access permissions for HOC and HOCM objects

Fixed in 2.28.0 (17.3.0 and 16.3.0)

When trying to list the HostOSConfigurationModules and HostOSConfiguration custom resources, the serviceuser user or a user with the global-admin or operator role obtains the access denied error. For example:

kubectl --kubeconfig ~/.kube/mgmt-config get hocm

Error from server (Forbidden): hostosconfigurationmodules.kaas.mirantis.com is forbidden:
User "2d74348b-5669-4c65-af31-6c05dbedac5f" cannot list resource "hostosconfigurationmodules"
in API group "kaas.mirantis.com" at the cluster scope: access denied

Workaround:

  1. Modify the global-admin role by adding a new entry with the following contents to the rules list:

    kubectl edit clusterroles kaas-global-admin
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurationmodules]
      verbs: ['*']
    
  2. For each Container Cloud project, modify the kaas-operator role by adding a new entry with the following contents to the rules list:

    kubectl -n <projectName> edit roles kaas-operator
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurations]
      verbs: ['*']
    
[42386] A load balancer service does not obtain the external IP address

Due to the MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first service that has been changed remains with no external IP address assigned. However, the second service, which was changed later, has the external IP assigned as expected.

To work around the issue, make a dummy change to the service object whose external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[41305] DHCP responses are lost between dnsmasq and dhcp-relay pods

Fixed in 2.28.0 (17.3.0 and 16.3.0)

After node maintenance of a management cluster, the newly added nodes may fail to undergo provisioning successfully. The issue relates to new nodes that are in the same L2 domain as the management cluster.

The issue was observed on environments having management cluster nodes configured with a single L2 segment used for all network traffic (PXE and LCM/management networks).

To verify whether the cluster is affected:

Verify whether the dnsmasq and dhcp-relay pods run on the same node in the management cluster:

kubectl -n kaas get pods -o wide | grep -e "dhcp\|dnsmasq"

Example of system response:

dhcp-relay-7d85f75f76-5vdw2   2/2   Running   2 (36h ago)   36h   10.10.0.122     kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (36h ago)   36h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>

If this is the case, proceed to the workaround below.

Workaround:

  1. Log in to a node that contains kubeconfig of the affected management cluster.

  2. Make sure that at least two management cluster nodes are schedulable:

    kubectl get node
    

    Example of a positive system response:

    NAME                                             STATUS   ROLES    AGE   VERSION
    kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-ad5a6f51-b98f-43c3-91d5-55fed3d0ff21   Ready    master   37h   v1.27.10-mirantis-1
    
  3. Delete the dhcp-relay pod:

    kubectl -n kaas delete pod <dhcp-relay-xxxxx>
    
  4. Verify that the dnsmasq and dhcp-relay pods are scheduled into different nodes:

    kubectl -n kaas get pods -o wide | grep -e "dhcp\|dnsmasq"
    

    Example of a positive system response:

    dhcp-relay-7d85f75f76-rkv03   2/2   Running   0             49s   10.10.0.121     kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   <none>   <none>
    dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (37h ago)   37h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.
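
For example, where <nodeName> is the Kubernetes name of the node running the ironic Pod (additional drain options, such as --ignore-daemonsets, may be required depending on the workloads present on the node):

kubectl cordon <nodeName>
kubectl drain <nodeName>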


LCM
[41540] LCM Agent cannot grab storage information on a host

Fixed in 17.1.5 and 16.1.5

Due to issues with managing physical NVMe devices, lcm-agent cannot grab storage information on a host. As a result, lcmmachine.status.hostinfo.hardware is empty and the following example error is present in logs:

{"level":"error","ts":"2024-05-02T12:26:10Z","logger":"agent", \
"msg":"get hardware details", \
"host":"kaas-node-548b2861-aed0-41c9-8ff2-10c5476b000b", \
"error":"new storage info: get disk info \"nvme0c0n1\": \
invoke command: exit status 1","errorVerbose":"exit status 1

As a workaround, on the affected node, create a symlink for any device indicated in lcm-agent logs. For example:

ln -sfn /dev/nvme0n1 /dev/nvme0c0n1

[40811] Pod is stuck in the Terminating state on the deleted node

Fixed in 17.1.3 and 16.1.3

During deletion of a machine, the related DaemonSet Pod can remain on the deleted node in the Terminating state. As a workaround, manually delete the Pod:

kubectl delete pod -n <podNamespace> <podName>

[39437] Failure to replace a master node on a Container Cloud cluster

Fixed in 2.29.0 (17.4.0 and 16.4.0)

During the replacement of a master node on a cluster of any type, the process may get stuck with Kubelet's NodeReady condition is Unknown in the machine status on the remaining master nodes.

As a workaround, log in on the affected node and run the following command:

docker restart ucp-kubelet

[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.
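
The following is a minimal command-level sketch of the above steps. The <namespace> and <mariadb-server-pod> values are placeholders, and /var/lib/mysql/galera.cache is the default Galera cache location, which is an assumption to verify on your cluster:

# Back up the /var/lib/mysql directory of the affected Pod
kubectl -n <namespace> exec <mariadb-server-pod> -- tar czf /tmp/mysql-backup.tar.gz /var/lib/mysql
kubectl -n <namespace> cp <mariadb-server-pod>:/tmp/mysql-backup.tar.gz ./mysql-backup.tar.gz

# Verify that the other replicas are up and ready
kubectl -n <namespace> get pods | grep mariadb

# Remove the galera.cache file of the affected replica
kubectl -n <namespace> exec <mariadb-server-pod> -- rm /var/lib/mysql/galera.cache

# Delete the affected Pod or wait until it is automatically restarted
kubectl -n <namespace> delete pod <mariadb-server-pod>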

[30294] Replacement of a master node is stuck on the calico-node Pod start

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a master node on a cluster of any type, the calico-node Pod fails to start on a new node that has the same IP address as the node being replaced.

Workaround:

  1. Log in to any master node.

  2. From a CLI with an MKE client bundle, create a shell alias to start calicoctl using the mirantis/ucp-dsinfo image:

    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/key.pem \
    -e ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/ca.pem \
    -e ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v /var/lib/docker/volumes/ucp-kv-certs/_data:/var/lib/docker/volumes/ucp-kv-certs/_data:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl \
    "
    
    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \
    -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \
    -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v ucp-node-certs:/ucp-node-certs:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl --allow-version-mismatch \
    "
    

    In the above command, replace the following values with the corresponding settings of the affected cluster:

    • <etcdEndpoint> is the etcd endpoint defined in the Calico configuration file. For example, ETCD_ENDPOINTS=127.0.0.1:12378

    • <mkeVersion> is the MKE version installed on your cluster, for example, 3.5.7, which results in the mirantis/ucp-dsinfo:3.5.7 image.

  3. Verify the node list on the cluster:

    kubectl get node
    
  4. Compare this list with the node list in Calico to identify the old node:

    calicoctl get node -o wide
    
  5. Remove the old node from Calico:

    calicoctl delete node kaas-node-<nodeID>
    
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>

Ceph
[41819] Graceful cluster reboot is blocked by the Ceph ClusterWorkloadLocks

Fixed in 2.27.0 (17.2.0 and 16.2.0)

During graceful reboot of a cluster with Ceph enabled, the reboot is blocked with the following message in the MiraCephMaintenance object status:

message: ClusterMaintenanceRequest found, Ceph Cluster is not ready to upgrade,
 delaying cluster maintenance

As a workaround, add the following snippet to the metadataServer section under cephFS in the spec section of <kcc-name>.yaml in the Ceph cluster:

cephClusterSpec:
  sharedFilesystem:
    cephFS:
    - name: cephfs-store
      metadataServer:
        activeCount: 1
        healthCheck:
          livenessProbe:
            probe:
              failureThreshold: 5
              initialDelaySeconds: 30
              periodSeconds: 30
              successThreshold: 1
              timeoutSeconds: 5
[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a managed cluster based on bare metal with Ceph enabled fails with a PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the descriptions of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale the affected StatefulSet or Deployment of the failing Pod down to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.


StackLight
[42304] Failure of shard relocation in the OpenSearch cluster

Fixed in 17.2.0, 16.2.0, 17.1.6, 16.1.6

On large managed clusters, shard relocation may fail in the OpenSearch cluster, which results in the yellow or red cluster status. The characteristic symptom of the issue is that in the stacklight namespace, the statefulset.apps/opensearch-master containers are experiencing throttling with the KubeContainersCPUThrottlingHigh alert firing for the following set of labels:

{created_by_kind="StatefulSet",created_by_name="opensearch-master",namespace="stacklight"}

Caution

The throttling that OpenSearch is experiencing may be temporary and related, for example, to a load spike or to the ongoing initialization of shards as part of disaster recovery or after a node restart. In this case, Mirantis recommends waiting until the initialization of all shards is finished. After that, verify the cluster state and whether throttling still exists. Apply the workaround below only if the throttling persists.

To verify that the initialization of shards is ongoing:

kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash

curl "http://localhost:9200/_cat/shards" | grep INITIALIZING

Example of system response:

.ds-system-000072    2 r INITIALIZING    10.232.182.135 opensearch-master-1
.ds-system-000073    1 r INITIALIZING    10.232.7.145   opensearch-master-2
.ds-system-000073    2 r INITIALIZING    10.232.182.135 opensearch-master-1
.ds-audit-000001     2 r INITIALIZING    10.232.7.145   opensearch-master-2

The system response above indicates that shards from the .ds-system-000072, .ds-system-000073, and .ds-audit-000001 indices are in the INITIALIZING state. In this case, Mirantis recommends waiting until this process is finished, and only then consider changing the limit.

You can additionally analyze the exact level of throttling and the current CPU usage on the Kubernetes Containers dashboard in Grafana.

Workaround:

  1. Verify the currently configured CPU requests and limits for the opensearch containers:

    kubectl -n stacklight get statefulset.apps/opensearch-master -o jsonpath="{.spec.template.spec.containers[?(@.name=='opensearch')].resources}"
    

    Example of system response:

    {"limits":{"cpu":"600m","memory":"8Gi"},"requests":{"cpu":"500m","memory":"6Gi"}}
    

    In the example above, the CPU request is 500m and the CPU limit is 600m.

  2. Increase the CPU limit to a reasonably high number.

    For example, the default CPU limit for the clusters with the clusterSize:large parameter set was increased from 8000m to 12000m for StackLight in Container Cloud 2.27.0 (Cluster releases 17.2.0 and 16.2.0).

    Note

    For details on the clusterSize parameter, see MOSK Operations Guide: StackLight configuration parameters - Cluster size.

    If the defaults are already overridden on the affected cluster using the resourcesPerClusterSize or resources parameters as described in MOSK Operations Guide: StackLight configuration parameters - Resource limits, then the exact recommended number depends on the currently set limit.

    Mirantis recommends increasing the limit by 50%. If it does not resolve the issue, another increase iteration will be required.

  3. Once you have selected the required CPU limit, increase it as described in MOSK Operations Guide: StackLight configuration parameters - Resource limits.

    If the CPU limit for the opensearch component is already set, increase it in the Cluster object for the opensearch parameter. Otherwise, the default StackLight limit is used; in this case, set the CPU limit for the opensearch component using the resources parameter.

  4. Wait until all opensearch-master pods are recreated with the new CPU limits and become running and ready.

    To verify the current CPU limit for every opensearch container in every opensearch-master pod separately:

    kubectl -n stacklight get pod/opensearch-master-<podSuffixNumber> -o jsonpath="{.spec.containers[?(@.name=='opensearch')].resources}"
    

    In the command above, replace <podSuffixNumber> with the pod suffix number. For example, pod/opensearch-master-0 or pod/opensearch-master-2.

    Example of system response:

    {"limits":{"cpu":"900m","memory":"8Gi"},"requests":{"cpu":"500m","memory":"6Gi"}}
    

    The waiting time may take up to 20 minutes depending on the cluster size.

If the issue is fixed, the KubeContainersCPUThrottlingHigh alert stops firing immediately, while OpenSearchClusterStatusWarning or OpenSearchClusterStatusCritical can still be firing for some time during shard relocation.

If the KubeContainersCPUThrottlingHigh alert is still firing, proceed with another iteration of the CPU limit increase.

[40020] Rollover policy update is not applied to the current index

Fixed in 17.2.0, 16.2.0, 17.1.6, 16.1.6

When updating the rollover_policy for the current system* and audit* data streams, the update is not applied to the current indices.

One of the indicators that the cluster is most likely affected is the KubeJobFailed alert firing for the elasticsearch-curator job and one or both of the following errors being present in the elasticsearch-curator pods that remain in the Error status:

2024-05-31 13:16:04,459 ERROR   Failed to complete action: delete_indices.  <class 'curator.exceptions.FailedExecution'>: Exception encountered.  Rerun with loglevel DEBUG and/or check Elasticsearch logs for more information. Exception: RequestError(400, 'illegal_argument_exception', 'index [.ds-audit-000001] is the write index for data stream [audit] and cannot be deleted')

or

2024-05-31 13:16:04,459 ERROR   Failed to complete action: delete_indices.  <class 'curator.exceptions.FailedExecution'>: Exception encountered.  Rerun with loglevel DEBUG and/or check Elasticsearch logs for more information. Exception: RequestError(400, 'illegal_argument_exception', 'index [.ds-system-000001] is the write index for data stream [system] and cannot be deleted')

Note

Instead of .ds-audit-000001 or .ds-system-000001 index names, similar names can be present with the same prefix but different suffix numbers.

If the alert and errors mentioned above are present, immediate action is required because the corresponding index size has already exceeded the space allocated for the index.

To verify that the cluster is affected:

Caution

Verify and apply the workaround to both index patterns, system and audit, separately.

If one of the indices is affected, the second one is most likely affected as well, although in rare cases only one index may be affected.

  1. Log in to the opensearch-master-0 Pod:

    kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash
    
  2. Verify that the rollover policy is present:

    • system:

      curl localhost:9200/_plugins/_ism/policies/system_rollover_policy
      
    • audit:

      curl localhost:9200/_plugins/_ism/policies/audit_rollover_policy
      

    The cluster is affected if the rollover policy is missing. Otherwise, proceed to the following step.

  3. Verify the system response from the previous step. For example:

    {"_id":"system_rollover_policy","_version":7229,"_seq_no":42362,"_primary_term":28,"policy":{"policy_id":"system_rollover_policy","description":"system index rollover policy.","last_updated_time":1708505222430,"schema_version":19,"error_notification":null,"default_state":"rollover","states":[{"name":"rollover","actions":[{"retry":{"count":3,"backoff":"exponential","delay":"1m"},"rollover":{"min_size":"14746mb","copy_alias":false}}],"transitions":[]}],"ism_template":[{"index_patterns":["system*"],"priority":200,"last_updated_time":1708505222430}]}}
    

    Verify and capture the following items separately for every policy:

    • The _seq_no and _primary_term values

    • The rollover policy threshold, which is defined in policy.states[0].actions[0].rollover.min_size

  4. List indices:

    • system:

      curl localhost:9200/_cat/indices | grep system
      

      Example of system response:

      [...]
      green open .ds-system-000001   FjglnZlcTKKfKNbosaE9Aw 2 1 1998295  0   1gb 507.9mb
      
    • audit:

      curl localhost:9200/_cat/indices | grep audit
      

      Example of system response:

      [...]
      green open .ds-audit-000001   FjglnZlcTKKfKNbosaE9Aw 2 1 1998295  0   1gb 507.9mb
      
  5. Select the index with the highest number and verify the rollover policy attached to the index:

    • system:

      curl localhost:9200/_plugins/_ism/explain/.ds-system-000001
      
    • audit:

      curl localhost:9200/_plugins/_ism/explain/.ds-audit-000001
      
    • If the rollover policy is not attached, the cluster is affected.

    • If the rollover policy is attached but _seq_no and _primary_term numbers do not match the previously captured ones, the cluster is affected.

    • If the index size drastically exceeds the defined threshold of the rollover policy (which is the previously captured min_size), the cluster is most probably affected.

Workaround:

  1. Log in to the opensearch-master-0 Pod:

    kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash
    
  2. If the policy is attached to the index but has different _seq_no and _primary_term, remove the policy from the index:

    Note

    Use the index with the highest number in the name, which was captured during the verification procedure.

    • system:

      curl -XPOST localhost:9200/_plugins/_ism/remove/.ds-system-000001
      
    • audit:

      curl -XPOST localhost:9200/_plugins/_ism/remove/.ds-audit-000001
      
  3. Re-add the policy:

    • system:

      curl -XPOST -H "Content-type: application/json" localhost:9200/_plugins/_ism/add/system* -d'{"policy_id":"system_rollover_policy"}'
      
    • audit:

      curl -XPOST -H "Content-type: application/json" localhost:9200/_plugins/_ism/add/audit* -d'{"policy_id":"audit_rollover_policy"}'
      
  4. Repeat the last step of the cluster verification procedure above and make sure that the policy is attached to the index and has the same _seq_no and _primary_term.

    If the index size drastically exceeds the defined threshold of the rollover policy (which is the previously captured min_size), wait up to 15 minutes and verify that the additional index is created with the consecutive number in the index name. For example:

    • system: if you applied changes to .ds-system-000001, wait until .ds-system-000002 is created.

    • audit: if you applied changes to .ds-audit-000001, wait until .ds-audit-000002 is created.

    If such an index is not created, escalate the issue to Mirantis support.


Container Cloud web UI
[41806] Configuration of a management cluster fails without Keycloak settings

Fixed in 17.1.4 and 16.1.4

During configuration of management cluster settings using the Configure cluster web UI menu, updating the Keycloak Truststore settings is erroneously mandatory, although it is designed to be optional.

As a workaround, update the management cluster using the API or CLI.

See also

Patch releases

2.26.0

The Mirantis Container Cloud major release 2.26.0:

  • Introduces support for the Cluster release 17.1.0 that is based on the Cluster release 16.1.0 and represents Mirantis OpenStack for Kubernetes (MOSK) 24.1.

  • Introduces support for the Cluster release 16.1.0 that is based on Mirantis Container Runtime (MCR) 23.0.9 and Mirantis Kubernetes Engine (MKE) 3.7.5 with Kubernetes 1.27.

  • Does not support greenfield deployments on deprecated Cluster releases of the 17.0.x and 16.0.x series. Use the latest available Cluster releases of the series instead.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.26.0.

Enhancements

This section outlines new features and enhancements introduced in the Container Cloud release 2.26.0. For the list of enhancements delivered with the Cluster releases introduced by Container Cloud 2.26.0, see 17.1.0 and 16.1.0.

Pre-update inspection of pinned product artifacts in a ‘Cluster’ object

To ensure that Container Cloud clusters remain consistently updated with the latest security fixes and product improvements, the Admission Controller has been enhanced. Now, it actively prevents the utilization of pinned custom artifacts for Container Cloud components. Specifically, it blocks a management or managed cluster release update, or any cluster configuration update, for example, adding public keys or proxy, if a Cluster object contains any custom Container Cloud artifacts with global or image-related values overwritten in the helm-releases section, until these values are removed.

Normally, the Container Cloud clusters do not contain pinned artifacts, which eliminates the need for any pre-update actions in most deployments. However, if the update of your cluster is blocked with the invalid HelmReleases configuration error, refer to Update notes: Pre-update actions for details.

Note

In rare cases, if the image-related or global values should be changed, you can use the ClusterRelease or KaaSRelease objects instead. But make sure to update these values manually after every major and patch update.

Note

The pre-update inspection applies only to images delivered by Container Cloud that are overwritten. Any custom images unrelated to the product components are not verified and do not block cluster update.
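
For illustration, an update-blocking override in the helm-releases section of a Cluster object may look similar to the following snippet. The component name and the overridden keys are examples only; the actual keys depend on the particular Helm chart:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: <componentName>
        values:
          # Example of an image-related value overwritten with a pinned
          # custom artifact; such an override blocks cluster update
          image:
            repository: <customRegistry>/<customImage>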

Disablement of worker machines on managed clusters

TechPreview

Implemented the machine disabling API that allows you to seamlessly remove a worker machine from the LCM control of a managed cluster. This action isolates the affected node without impacting other machines in the cluster, effectively eliminating it from the Kubernetes cluster. This functionality proves invaluable in scenarios where a malfunctioning machine impedes cluster updates.

Day-2 management API for bare metal clusters

TechPreview

Added initial Technology Preview support for the HostOSConfiguration and HostOSConfigurationModules custom resources in the bare metal provider. These resources introduce configuration modules that allow managing the operating system of a bare metal host granularly without rebuilding the node from scratch. Such an approach prevents workload evacuation and significantly reduces configuration time.

Configuration modules manage various settings of the operating system using Ansible playbooks, adhering to specific schemas and metadata requirements. For a description of the module format, schemas, and rules, contact Mirantis support.

Warning

For security reasons and to ensure safe and reliable cluster operability, contact Mirantis support to start using these custom resources.

Caution

    While the feature is still under development, Mirantis highly recommends deleting all HostOSConfiguration objects, if any, before automatic upgrade of the management cluster to Container Cloud 2.27.0 (Cluster release 16.2.0). After the upgrade, you can recreate the required objects using the updated parameters.

This precautionary step prevents re-processing and re-applying of existing configuration, which is defined in HostOSConfiguration objects, during management cluster upgrade to 2.27.0. Such behavior is caused by changes in the HostOSConfiguration API introduced in 2.27.0.

Strict filtering for devices on bare metal clusters

Implemented the strict byID filtering for targeting system disks using specific device options: byPath, serialNumber, and wwn. These options offer a more reliable alternative to the unpredictable byName naming format.

Mirantis recommends adopting these new device naming options when adding new nodes and redeploying existing ones to ensure a predictable and stable device naming schema.

Dynamic IP allocation for faster host provisioning

Introduced a mechanism in the Container Cloud dnsmasq server to dynamically allocate IP addresses for bare metal hosts during provisioning. This new mechanism replaces sequential IP allocation that includes the ping check with dynamic IP allocation without the ping check. Such behavior significantly increases the number of bare metal servers that you can provision in parallel, which allows you to streamline the process of setting up a large managed cluster.

Support for Kubernetes auditing and profiling on management clusters

Added support for the Kubernetes auditing and profiling enablement and configuration on management clusters. The auditing option is enabled by default. You can configure both options using the Cluster object of the management cluster.

Note

For managed clusters, you can also configure Kubernetes auditing along with profiling using the Cluster object of a managed cluster.

Cleanup of LVM thin pool volumes during cluster provisioning

Implemented automatic cleanup of LVM thin pool volumes during the provisioning stage to prevent issues with logical volume detection before removal, which could cause node cleanup failure during cluster redeployment.

Wiping a device or partition before a bare metal cluster deployment

Implemented the capability to erase existing data from hardware devices to be used for a bare metal management or managed cluster deployment. Using the new wipeDevice structure, you can either erase an existing partition or remove all existing partitions from a physical device. For these purposes, use the eraseMetadata or eraseDevice option that configures cleanup behavior during configuration of a custom bare metal host profile.

Note

The wipeDevice option replaces the deprecated wipe option that will be removed in one of the following releases. For backward compatibility, any existing wipe: true option is automatically converted to the following structure:

wipeDevice:
  eraseMetadata:
    enabled: True
Policy Controller for validating pod image signatures

Technology Preview

Introduced initial Technology Preview support for the Policy Controller that validates signatures of pod images. The Policy Controller verifies that images used by the Container Cloud and Mirantis OpenStack for Kubernetes controllers are signed by a trusted authority. The Policy Controller inspects defined image policies that list Docker registries and authorities for signature validation.

Configuring trusted certificates for Keycloak

Added support for configuring Keycloak truststore using the Container Cloud web UI to allow for a proper validation of client self-signed certificates. The truststore is used to ensure secured connection to identity brokers, LDAP identity providers, and others.

Health monitoring of cluster LCM operations

Added the LCM Operation condition to monitor the health of all LCM operations on a cluster and its machines, which is useful during cluster update. You can monitor the status of LCM operations using the Container Cloud web UI in the status hover menus of a cluster and machine.

Container Cloud web UI improvements for bare metal

Reorganized the Container Cloud web UI to optimize the baremetal-based managed cluster deployment and management:

  • Moved the L2 Templates and Subnets tabs from the Clusters menu to the separate Networks tab on the left sidebar.

  • Improved the Create Subnet menu by adding configuration for different subnet types.

  • Reorganized the Baremetal tab in the left sidebar that now contains Hosts, Hosts Profiles, and Credentials tabs.

  • Implemented the ability to add bare metal host profiles using the web UI.

  • Moved description of a baremetal host to Host info located in a baremetal host kebab menu on the Hosts page of the Baremetal tab.

  • Moved description of baremetal host credentials to Credential info located in a credential kebab menu on the Credentials page of the Baremetal tab.

Documentation enhancements

On top of continuous improvements delivered to the existing Container Cloud guides, added the documentation on how to export logs from OpenSearch dashboards to CSV.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.26.0 along with the Cluster releases 17.1.0 and 16.1.0.

Note

This section provides descriptions of issues addressed since the last Container Cloud patch release 2.25.4.

For details on addressed issues in earlier patch releases since 2.25.0, which are also included into the major release 2.26.0, refer to 2.25.x patch releases.

  • [32761] [LCM] Fixed the issue with node cleanup failing on MOSK clusters due to the Ansible provisioner hanging in a loop while trying to remove LVM thin pool logical volumes, which occurred due to issues with volume detection before removal during cluster redeployment. The issue resolution comprises implementation of automatic cleanup of LVM thin pool volumes during the provisioning stage.

  • [36924] [LCM] Fixed the issue with Ansible starting to run on nodes of a managed cluster after the mcc-cache certificate is applied on a management cluster.

  • [37268] [LCM] Fixed the issue with Container Cloud cluster being blocked by a node stuck in the Prepare or Deploy state with error processing package openssh-server. The issue was caused by customizations in /etc/ssh/sshd_config, such as additional Match statements.

  • [34820] [Ceph] Fixed the issue with the Ceph rook-operator failing to connect to Ceph RADOS Gateway pods on clusters with the Federal Information Processing Standard mode enabled.

  • [38340] [StackLight] Fixed the issue with Telegraf Docker Swarm timing out while collecting data by increasing its timeout from 10 to 25 seconds.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.26.0 including the Cluster releases 17.1.0 and 16.1.0.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[46245] Lack of access permissions for HOC and HOCM objects

Fixed in 2.28.0 (17.3.0 and 16.3.0)

When trying to list the HostOSConfigurationModules and HostOSConfiguration custom resources, serviceuser or a user with the global-admin or operator role obtains the access denied error. For example:

kubectl --kubeconfig ~/.kube/mgmt-config get hocm

Error from server (Forbidden): hostosconfigurationmodules.kaas.mirantis.com is forbidden:
User "2d74348b-5669-4c65-af31-6c05dbedac5f" cannot list resource "hostosconfigurationmodules"
in API group "kaas.mirantis.com" at the cluster scope: access denied

Workaround:

  1. Modify the global-admin role by adding a new entry with the following contents to the rules list:

    kubectl edit clusterroles kaas-global-admin
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurationmodules]
      verbs: ['*']
    
  2. For each Container Cloud project, modify the kaas-operator role by adding a new entry with the following contents to the rules list:

    kubectl -n <projectName> edit roles kaas-operator
    
    - apiGroups: [kaas.mirantis.com]
      resources: [hostosconfigurations]
      verbs: ['*']
    
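After the roles are updated, verify that listing the objects no longer returns the access denied error. For example:

kubectl --kubeconfig ~/.kube/mgmt-config get hocm
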
[42386] A load balancer service does not obtain the external IP address

Due to the MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first service that has been changed remains with no external IP address assigned. However, the second service, which was changed later, has the external IP assigned as expected.

To work around the issue, make a dummy change to the service object where external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[41305] DHCP responses are lost between dnsmasq and dhcp-relay pods

Fixed in 2.28.0 (17.3.0 and 16.3.0)

After node maintenance of a management cluster, the newly added nodes may fail to undergo provisioning successfully. The issue relates to new nodes that are in the same L2 domain as the management cluster.

The issue was observed on environments having management cluster nodes configured with a single L2 segment used for all network traffic (PXE and LCM/management networks).

To verify whether the cluster is affected:

Verify whether the dnsmasq and dhcp-relay pods run on the same node in the management cluster:

kubectl -n kaas get pods -o wide | grep -e "dhcp\|dnsmasq"

Example of system response:

dhcp-relay-7d85f75f76-5vdw2   2/2   Running   2 (36h ago)   36h   10.10.0.122     kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (36h ago)   36h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>

If this is the case, proceed to the workaround below.

Workaround:

  1. Log in to a node that contains kubeconfig of the affected management cluster.

  2. Make sure that at least two management cluster nodes are schedulable:

    kubectl get node
    

    Example of a positive system response:

    NAME                                             STATUS   ROLES    AGE   VERSION
    kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   Ready    master   37h   v1.27.10-mirantis-1
    kaas-node-ad5a6f51-b98f-43c3-91d5-55fed3d0ff21   Ready    master   37h   v1.27.10-mirantis-1
    
  3. Delete the dhcp-relay pod:

    kubectl -n kaas delete pod <dhcp-relay-xxxxx>
    
  4. Verify that the dnsmasq and dhcp-relay pods are scheduled into different nodes:

    kubectl -n kaas get pods -o wide | grep -e "dhcp\|dnsmasq"
    

    Example of a positive system response:

    dhcp-relay-7d85f75f76-rkv03   2/2   Running   0             49s   10.10.0.121     kaas-node-bcedb87b-b3ce-46a4-a4ca-ea3068689e40   <none>   <none>
    dnsmasq-8f4b484b4-slhbd       5/5   Running   1 (37h ago)   37h   10.233.123.75   kaas-node-8a24b81c-76d0-4d4c-8421-962bd39df5ad   <none>   <none>
    
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.
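
For example:

kubectl cordon <nodeName>
kubectl drain <nodeName>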


vSphere
[40747] Unsupported Cluster release is available for managed cluster deployment

Fixed in 2.26.1

The Cluster release 16.0.0, which is not supported for greenfield vSphere-based deployments, is still available in the drop-down menu of the cluster creation window in the Container Cloud web UI.

Do not select this Cluster release to prevent deployment failures. Use the latest supported version instead.


LCM
[41540] LCM Agent cannot grab storage information on a host

Fixed in 17.1.5 and 16.1.5

Due to issues with managing physical NVME devices, lcm-agent cannot grab storage information on a host. As a result, lcmmachine.status.hostinfo.hardware is empty and the following example error is present in logs:

{"level":"error","ts":"2024-05-02T12:26:10Z","logger":"agent", \
"msg":"get hardware details", \
"host":"kaas-node-548b2861-aed0-41c9-8ff2-10c5476b000b", \
"error":"new storage info: get disk info \"nvme0c0n1\": \
invoke command: exit status 1","errorVerbose":"exit status 1

As a workaround, on the affected node, create a symlink for any device indicated in lcm-agent logs. For example:

ln -sfn /dev/nvme0n1 /dev/nvme0c0n1
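
After creating the symlink, you can verify that the storage information is collected again by inspecting the hardware details in the LCMMachine object. An illustrative check; adjust the project and machine names to your environment:

kubectl -n <projectName> get lcmmachine <machineName> -o jsonpath='{.status.hostinfo.hardware}'
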
[40036] Node is not removed from a cluster when its Machine is disabled

Fixed in 2.26.1 (17.1.1 and 16.1.1)

During the ClusterRelease update of a MOSK cluster, a node cannot be removed from the Kubernetes cluster if the related Machine object is disabled.

As a workaround, remove the finalizer from the affected Node object.
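
For example, open the affected Node object for editing and delete the corresponding finalizer entry from metadata.finalizers:

kubectl edit node <nodeName>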

[39437] Failure to replace a master node on a Container Cloud cluster

Fixed in 2.29.0 (17.4.0 and 16.4.0)

During the replacement of a master node on a cluster of any type, the process may get stuck with the Kubelet's NodeReady condition is Unknown message in the machine status on the remaining master nodes.

As a workaround, log in on the affected node and run the following command:

docker restart ucp-kubelet
[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod (see the command sketch after this procedure).

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.
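
A minimal command sketch for steps 3 and 4, assuming the mariadb-server Pods run in the kaas namespace and the affected replica is mariadb-server-0; adjust the namespace and Pod name to your environment:

# Remove the galera.cache file for the affected replica
kubectl -n kaas exec mariadb-server-0 -- rm /var/lib/mysql/galera.cache
# Delete the affected Pod so that Kubernetes recreates it
kubectl -n kaas delete pod mariadb-server-0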

[30294] Replacement of a master node is stuck on the calico-node Pod start

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a master node on a cluster of any type, the calico-node Pod fails to start on a new node that has the same IP address as the node being replaced.

Workaround:

  1. Log in to any master node.

  2. From a CLI with an MKE client bundle, create a shell alias to start calicoctl using the mirantis/ucp-dsinfo image:

    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/key.pem \
    -e ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/ca.pem \
    -e ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v /var/lib/docker/volumes/ucp-kv-certs/_data:/var/lib/docker/volumes/ucp-kv-certs/_data:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl \
    "
    
    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \
    -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \
    -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v ucp-node-certs:/ucp-node-certs:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl --allow-version-mismatch \
    "
    

    In the above command, replace the following values with the corresponding settings of the affected cluster:

    • <etcdEndpoint> is the etcd endpoint defined in the Calico configuration file. For example, ETCD_ENDPOINTS=127.0.0.1:12378

    • <mkeVersion> is the MKE version installed on your cluster. For example, mirantis/ucp-dsinfo:3.5.7.

  3. Verify the node list on the cluster:

    kubectl get node
    
  4. Compare this list with the node list in Calico to identify the old node:

    calicoctl get node -o wide
    
  5. Remove the old node from Calico:

    calicoctl delete node kaas-node-<nodeID>
    
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>

Ceph
[41819] Graceful cluster reboot is blocked by the Ceph ClusterWorkloadLocks

Fixed in 2.27.0 (17.2.0 and 16.2.0)

During graceful reboot of a cluster with Ceph enabled, the reboot is blocked with the following message in the MiraCephMaintenance object status:

message: ClusterMaintenanceRequest found, Ceph Cluster is not ready to upgrade,
 delaying cluster maintenance

As a workaround, add the following snippet to the cephFS section under metadataServer in the spec section of <kcc-name>.yaml in the Ceph cluster:

cephClusterSpec:
  sharedFilesystem:
    cephFS:
    - name: cephfs-store
      metadataServer:
        activeCount: 1
        healthCheck:
          livenessProbe:
            probe:
              failureThreshold: 5
              initialDelaySeconds: 30
              periodSeconds: 30
              successThreshold: 1
              timeoutSeconds: 5
[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a bare metal-based managed cluster with Ceph enabled fails with a PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the descriptions of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas (see the command sketch after this procedure).

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.
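
A command sketch for steps 3 and 7, assuming the affected workload is a StatefulSet; adjust the resource type, names, project, and the original number of replicas:

# Scale the affected StatefulSet down to 0 replicas
kubectl -n <affectedProjectName> scale statefulset <affectedStatefulSetName> --replicas=0
# After the volume is unmapped and the volumeattachment is deleted, scale it back up
kubectl -n <affectedProjectName> scale statefulset <affectedStatefulSetName> --replicas=<originalReplicasNumber>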


StackLight
[44193] OpenSearch reaches 85% disk usage watermark affecting the cluster state

Fixed in 2.29.0 (17.4.0 and 16.4.0)

On High Availability (HA) clusters that use Local Volume Provisioner (LVP), Prometheus and OpenSearch from StackLight may share the same pool of storage. In such configuration, OpenSearch may approach the 85% disk usage watermark due to the combined storage allocation and usage patterns set by the Persistent Volume Claim (PVC) size parameters for Prometheus and OpenSearch, which consume storage the most.

When the 85% threshold is reached, the affected node is transitioned to the read-only state, preventing shard allocation and causing the OpenSearch cluster state to transition to Warning (Yellow) or Critical (Red).

Caution

The issue and the provided workaround apply only for clusters on which OpenSearch and Prometheus utilize the same storage pool.

To verify that the cluster is affected:

  1. Verify the result of the following formula:

    0.8 × OpenSearch_PVC_Size_GB + Prometheus_PVC_Size_GB > 0.85 × Total_Storage_Capacity_GB
    

    In the formula, define the following values:

    OpenSearch_PVC_Size_GB

    Derived from .values.elasticsearch.persistentVolumeUsableStorageSizeGB, defaulting to .values.elasticsearch.persistentVolumeClaimSize if unspecified. To obtain the OpenSearch PVC size:

    kubectl -n <namespaceName> get cluster <clusterName> -o yaml |\
    yq '.spec.providerSpec.value.helmReleases[] | select(.name == "stacklight") | .values.elasticsearch.persistentVolumeClaimSize '
    

    Example of system response:

    10000Gi
    
    Prometheus_PVC_Size_GB

    Sourced from .values.prometheusServer.persistentVolumeClaimSize. To obtain the Prometheus PVC size:

    kubectl -n <namespaceName> get cluster <clusterName> -o yaml |\
    yq '.spec.providerSpec.value.helmReleases[] | select(.name == "stacklight") | .values.prometheusServer.persistentVolumeClaimSize '
    

    Example of system response:

    4000Gi
    
    Total_Storage_Capacity_GB

    Total capacity of the OpenSearch PVCs. For LVP, the capacity of the storage pool. To obtain the total capacity:

    kubectl get pvc -n stacklight -l app=opensearch-master \
    -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
    

    The system response contains multiple outputs, one per opensearch-master node. Select the capacity for the affected node.

    Note

    Convert the values to GB if they are set in different units.

    If the formula result is positive, it is an early indication that the cluster is affected.

  2. Verify whether the OpenSearchClusterStatusWarning or OpenSearchClusterStatusCritical alert is firing. If so, verify the following:

    1. Log in to the OpenSearch web UI.

    2. In Management -> Dev Tools, run the following command:

      GET _cluster/allocation/explain
      

      The following system response indicates that the corresponding node is affected:

      "explanation": "the node is above the low watermark cluster setting \
      [cluster.routing.allocation.disk.watermark.low=85%], using more disk space \
      than the maximum allowed [85.0%], actual free: [xx.xxx%]"
      

      Note

      The system response may contain a watermark percentage even higher than 85.0%, depending on the case.

Workaround:

Warning

The workaround implies adjustment of the retention threshold for OpenSearch. Depending on the new threshold, some old logs will be deleted.

  1. Adjust or set .values.elasticsearch.persistentVolumeUsableStorageSizeGB to a lower value so that the verification formula above becomes non-positive. For configuration details, see MOSK Operations Guide: StackLight configuration parameters - OpenSearch. A configuration sketch is provided after this procedure.

    Mirantis also recommends reserving some space for other PVCs using storage from the pool. Use the following formula to calculate the required space:

    persistentVolumeUsableStorageSizeGB =
    0.84 × ((1 - Reserved_Percentage - Filesystem_Reserve) ×
    Total_Storage_Capacity_GB - Prometheus_PVC_Size_GB) /
    0.8
    

    In the formula, define the following values:

    Reserved_Percentage

    A user-defined variable that specifies what percentage of the total storage capacity should not be used by OpenSearch or Prometheus. This is used to reserve space for other components. It should be expressed as a decimal. For example, for 5% of reservation, Reserved_Percentage is 0.05. Mirantis recommends using 0.05 as a starting point.

    Filesystem_Reserve

    Percentage to deduct for filesystems that may reserve some portion of the available storage, which is marked as occupied. For example, for EXT4, it is 5% by default, so the value must be 0.05.

    Prometheus_PVC_Size_GB

    Sourced from .values.prometheusServer.persistentVolumeClaimSize.

    Total_Storage_Capacity_GB

    Total capacity of the OpenSearch PVCs. For LVP, the capacity of the storage pool. To obtain the total capacity:

    kubectl get pvc -n stacklight -l app=opensearch-master \
    -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
    

    The system response contains multiple outputs, one per opensearch-master node. Select the capacity for the affected node.

    Note

    Convert the values to GB if they are set in different units.

    The calculation above provides the maximum safe storage to allocate for .values.elasticsearch.persistentVolumeUsableStorageSizeGB. Use this formula as a reference when setting .values.elasticsearch.persistentVolumeUsableStorageSizeGB on a cluster.

  2. Wait up to 15-20 minutes for OpenSearch to perform the cleaning.

  3. Verify that the cluster is not affected anymore using the procedure above.
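
A minimal configuration sketch for step 1, showing where .values.elasticsearch.persistentVolumeUsableStorageSizeGB is set in the stacklight Helm release values of the Cluster object. The value below is an example only; calculate the actual one using the formula above:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: stacklight
        values:
          elasticsearch:
            # Example value; compute the actual one using the formula above
            persistentVolumeUsableStorageSizeGB: 8000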

[42304] Failure of shard relocation in the OpenSearch cluster

Fixed in 17.2.0, 16.2.0, 17.1.6, 16.1.6

On large managed clusters, shard relocation may fail in the OpenSearch cluster with the yellow or red status of the OpenSearch cluster. The characteristic symptom of the issue is that in the stacklight namespace, the statefulset.apps/opensearch-master containers are experiencing throttling with the KubeContainersCPUThrottlingHigh alert firing for the following set of labels:

{created_by_kind="StatefulSet",created_by_name="opensearch-master",namespace="stacklight"}

Caution

The throttling that OpenSearch is experiencing may be a temporary situation related, for example, to a peak load and the ongoing shard initialization as part of disaster recovery or after a node restart. In this case, Mirantis recommends waiting until the initialization of all shards is finished. After that, verify the cluster state and whether throttling still exists. Only if throttling persists, apply the workaround below.

To verify that the initialization of shards is ongoing:

kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash

curl "http://localhost:9200/_cat/shards" | grep INITIALIZING

Example of system response:

.ds-system-000072    2 r INITIALIZING    10.232.182.135 opensearch-master-1
.ds-system-000073    1 r INITIALIZING    10.232.7.145   opensearch-master-2
.ds-system-000073    2 r INITIALIZING    10.232.182.135 opensearch-master-1
.ds-audit-000001     2 r INITIALIZING    10.232.7.145   opensearch-master-2

The system response above indicates that shards from the .ds-system-000072, .ds-system-000073, and .ds-audit-000001 indices are in the INITIALIZING state. In this case, Mirantis recommends waiting until this process is finished and only then considering a change of the limit.

You can additionally analyze the exact level of throttling and the current CPU usage on the Kubernetes Containers dashboard in Grafana.

Workaround:

  1. Verify the currently configured CPU requests and limits for the opensearch containers:

    kubectl -n stacklight get statefulset.apps/opensearch-master -o jsonpath="{.spec.template.spec.containers[?(@.name=='opensearch')].resources}"
    

    Example of system response:

    {"limits":{"cpu":"600m","memory":"8Gi"},"requests":{"cpu":"500m","memory":"6Gi"}}
    

    In the example above, the CPU request is 500m and the CPU limit is 600m.

  2. Increase the CPU limit to a reasonably high number.

    For example, the default CPU limit for the clusters with the clusterSize:large parameter set was increased from 8000m to 12000m for StackLight in Container Cloud 2.27.0 (Cluster releases 17.2.0 and 16.2.0).

    Note

    For details on the clusterSize parameter, see MOSK Operations Guide: StackLight configuration parameters - Cluster size.

    If the defaults are already overridden on the affected cluster using the resourcesPerClusterSize or resources parameters as described in MOSK Operations Guide: StackLight configuration parameters - Resource limits, then the exact recommended number depends on the currently set limit.

    Mirantis recommends increasing the limit by 50%. If it does not resolve the issue, another increase iteration will be required.

  3. After you select the required CPU limit, increase it as described in MOSK Operations Guide: StackLight configuration parameters - Resource limits.

    If the CPU limit for the opensearch component is already set in the Cluster object, increase the existing value of the opensearch parameter. Otherwise, the default StackLight limit is used; in this case, set the CPU limit for the opensearch component using the resources parameter. See the configuration sketch at the end of this issue description.

  4. Wait until all opensearch-master pods are recreated with the new CPU limits and become running and ready.

    To verify the current CPU limit for every opensearch container in every opensearch-master pod separately:

    kubectl -n stacklight get pod/opensearch-master-<podSuffixNumber> -o jsonpath="{.spec.containers[?(@.name=='opensearch')].resources}"
    

    In the command above, replace <podSuffixNumber> with the pod suffix number. For example, pod/opensearch-master-0 or pod/opensearch-master-2.

    Example of system response:

    {"limits":{"cpu":"900m","memory":"8Gi"},"requests":{"cpu":"500m","memory":"6Gi"}}
    

    The wait may take up to 20 minutes, depending on the cluster size.

If the issue is fixed, the KubeContainersCPUThrottlingHigh alert stops firing immediately, while OpenSearchClusterStatusWarning or OpenSearchClusterStatusCritical can still be firing for some time during shard relocation.

If the KubeContainersCPUThrottlingHigh alert is still firing, proceed with another iteration of the CPU limit increase.
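
A minimal configuration sketch for step 3 of the workaround, showing where the resources parameter for the opensearch component might be set in the stacklight Helm release values of the Cluster object. The exact parameter layout is described in MOSK Operations Guide: StackLight configuration parameters - Resource limits; the value below is an example only:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: stacklight
        values:
          resources:
            # Illustrative CPU limit for the opensearch component;
            # calculate the actual value as described in the workaround above
            opensearch:
              limits:
                cpu: "1200m"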

[40020] Rollover policy update is not applied to the current index

Fixed in 17.2.0, 16.2.0, 17.1.6, 16.1.6

When rollover_policy is updated for the current system* and audit* data streams, the update is not applied to the indices.

One of the indicators that the cluster is most likely affected is the KubeJobFailed alert firing for the elasticsearch-curator job and one or both of the following errors being present in the elasticsearch-curator pods that remain in the Error status:

2024-05-31 13:16:04,459 ERROR   Failed to complete action: delete_indices.  <class 'curator.exceptions.FailedExecution'>: Exception encountered.  Rerun with loglevel DEBUG and/or check Elasticsearch logs for more information. Exception: RequestError(400, 'illegal_argument_exception', 'index [.ds-audit-000001] is the write index for data stream [audit] and cannot be deleted')

or

2024-05-31 13:16:04,459 ERROR   Failed to complete action: delete_indices.  <class 'curator.exceptions.FailedExecution'>: Exception encountered.  Rerun with loglevel DEBUG and/or check Elasticsearch logs for more information. Exception: RequestError(400, 'illegal_argument_exception', 'index [.ds-system-000001] is the write index for data stream [system] and cannot be deleted')

Note

Instead of .ds-audit-000001 or .ds-system-000001 index names, similar names can be present with the same prefix but different suffix numbers.

If the alert and errors mentioned above are present, immediate action is required because the corresponding index size has already exceeded the space allocated for the index.

To verify that the cluster is affected:

Caution

Verify and apply the workaround to both index patterns, system and audit, separately.

If one of the indices is affected, the second one is most likely affected as well, although in rare cases only one index may be affected.

  1. Log in to the opensearch-master-0 Pod:

    kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash
    
  2. Verify that the rollover policy is present:

    • system:

      curl localhost:9200/_plugins/_ism/policies/system_rollover_policy
      
    • audit:

      curl localhost:9200/_plugins/_ism/policies/audit_rollover_policy
      

    The cluster is affected if the rollover policy is missing. Otherwise, proceed to the following step.

  3. Verify the system response from the previous step. For example:

    {"_id":"system_rollover_policy","_version":7229,"_seq_no":42362,"_primary_term":28,"policy":{"policy_id":"system_rollover_policy","description":"system index rollover policy.","last_updated_time":1708505222430,"schema_version":19,"error_notification":null,"default_state":"rollover","states":[{"name":"rollover","actions":[{"retry":{"count":3,"backoff":"exponential","delay":"1m"},"rollover":{"min_size":"14746mb","copy_alias":false}}],"transitions":[]}],"ism_template":[{"index_patterns":["system*"],"priority":200,"last_updated_time":1708505222430}]}}
    

    Verify and capture the following items separately for every policy:

    • The _seq_no and _primary_term values

    • The rollover policy threshold, which is defined in policy.states[0].actions[0].rollover.min_size

  4. List indices:

    • system:

      curl localhost:9200/_cat/indices | grep system
      

      Example of system response:

      [...]
      green open .ds-system-000001   FjglnZlcTKKfKNbosaE9Aw 2 1 1998295  0   1gb 507.9mb
      
    • audit:

      curl localhost:9200/_cat/indices | grep audit
      

      Example of system response:

      [...]
      green open .ds-audit-000001   FjglnZlcTKKfKNbosaE9Aw 2 1 1998295  0   1gb 507.9mb
      
  5. Select the index with the highest number and verify the rollover policy attached to the index:

    • system:

      curl localhost:9200/_plugins/_ism/explain/.ds-system-000001
      
    • audit:

      curl localhost:9200/_plugins/_ism/explain/.ds-audit-000001
      
    • If the rollover policy is not attached, the cluster is affected.

    • If the rollover policy is attached but _seq_no and _primary_term numbers do not match the previously captured ones, the cluster is affected.

    • If the index size drastically exceeds the defined threshold of the rollover policy (which is the previously captured min_size), the cluster is most probably affected.

Workaround:

  1. Log in to the opensearch-master-0 Pod:

    kubectl exec -it pod/opensearch-master-0 -n stacklight -c opensearch -- bash
    
  2. If the policy is attached to the index but has different _seq_no and _primary_term, remove the policy from the index:

    Note

    Use the index with the highest number in the name, which was captured during the verification procedure.

    • system:

      curl -XPOST localhost:9200/_plugins/_ism/remove/.ds-system-000001
      
    • audit:

      curl -XPOST localhost:9200/_plugins/_ism/remove/.ds-audit-000001
      
  3. Re-add the policy:

    • system:

      curl -XPOST -H "Content-type: application/json" localhost:9200/_plugins/_ism/add/system* -d'{"policy_id":"system_rollover_policy"}'
      
    • audit:

      curl -XPOST -H "Content-type: application/json" localhost:9200/_plugins/_ism/add/audit* -d'{"policy_id":"audit_rollover_policy"}'
      
  4. Repeat the last step of the cluster verification procedure above and make sure that the policy is attached to the index and has the same _seq_no and _primary_term.

    If the index size drastically exceeds the defined threshold of the rollover policy (which is the previously captured min_size), wait up to 15 minutes and verify that the additional index is created with the consecutive number in the index name. For example:

    • system: if you applied changes to .ds-system-000001, wait until .ds-system-000002 is created.

    • audit: if you applied changes to .ds-audit-000001, wait until .ds-audit-000002 is created.

    If such an index is not created, escalate the issue to Mirantis support.


Container Cloud web UI
[41806] Configuration of a management cluster fails without Keycloak settings

Fixed in 17.1.4 and 16.1.4

During configuration of management cluster settings using the Configure cluster web UI menu, updating the Keycloak Truststore settings is erroneously mandatory, although it is designed to be optional.

As a workaround, update the management cluster using the API or CLI.

Components versions

The following table lists the major components and their versions delivered in the Container Cloud 2.26.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

Bare metal Updated

ambasador

1.39.13

baremetal-dnsmasq

base-2-26-alpine-20240129134230

baremetal-operator

base-2-26-alpine-20240129135007

baremetal-provider

1.39.13

bm-collective

base-2-26-alpine-20240129155244

cluster-api-provider-baremetal

1.39.13

ironic

yoga-jammy-20240108060019

ironic-inspector

yoga-jammy-20240108060019

ironic-prometheus-exporter

0.1-20240117102150

kaas-ipam

base-2-26-alpine-20240129213142

kubernetes-entrypoint

1.0.1-55b02f7-20231019172556

mariadb

10.6.14-focal-20231127070342

metallb-controller

0.13.12-31212f9e-amd64

metallb-speaker

0.13.12-31212f9e-amd64

syslog-ng

base-alpine-20240129163811

Container Cloud

admission-controller Updated

1.39.13

agent-controller Updated

1.39.13

byo-cluster-api-controller New

1.39.13

byo-credentials-controller New

1.39.13

ceph-kcc-controller Updated

1.39.13

cert-manager-controller

1.11.0-5

cinder-csi-plugin Updated

1.27.2-11

client-certificate-controller Updated

1.39.13

configuration-collector Updated

1.39.13

csi-attacher Updated

4.2.0-4

csi-node-driver-registrar Updated

2.7.0-4

csi-provisioner Updated

3.4.1-4

csi-resizer Updated

1.7.0-4

csi-snapshotter Updated

6.2.1-mcc-3

event-controller Updated

1.39.13

frontend Updated

1.39.13

golang

1.20.4-alpine3.17

iam-controller Updated

1.39.13

kaas-exporter Updated

1.39.13

kproxy Updated

1.39.13

lcm-controller Updated

1.39.13

license-controller Updated

1.39.13

livenessprobe Updated

2.9.0-4

machinepool-controller Updated

1.38.17

mcc-haproxy Updated

0.24.0-46-gdaf7dbc

metrics-server Updated

0.6.3-6

nginx Updated

1.39.13

policy-controller New

1.39.13

portforward-controller Updated

1.39.13

proxy-controller Updated

1.39.13

rbac-controller Updated

1.39.13

registry Updated

2.8.1-9

release-controller Updated

1.39.13

rhellicense-controller Updated

1.39.13

scope-controller Updated

1.39.13

storage-discovery Updated

1.39.13

user-controller Updated

1.39.13

IAM

iam Updated

1.39.13

iam-controller Updated

1.39.13

keycloak Removed

n/a

mcc-keycloak New

23.0.3-1

OpenStack Updated

host-os-modules-controller New

1.39.13

openstack-cloud-controller-manager

v1.27.2-12

openstack-cluster-api-controller

1.39.13

openstack-provider

1.39.13

os-credentials-controller

1.39.13

VMware vSphere

mcc-keepalived Updated

0.24.0-46-gdaf7dbc

squid-proxy

0.0.1-10-g24a0d69

vsphere-cloud-controller-manager New

v1.27.0-5

vsphere-cluster-api-controller Updated

1.39.13

vsphere-credentials-controller Updated

1.39.13

vsphere-csi-driver New

v3.0.2-1

vsphere-csi-syncer New

v3.0.2-1

vsphere-provider Updated

1.39.13

vsphere-vm-template-controller Updated

1.39.13

Artifacts

This section lists the artifacts of components included in the Container Cloud release 2.26.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts
Bare metal artifacts

Artifact

Component

Path

Binaries Updated

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20240201183421

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20240201183421

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-146-1bd8e71.tgz

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.39.13.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.39.13.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.39.13.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.39.13.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.39.13.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.39.13.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.39.13.tgz

Docker images Updated

ambasador

mirantis.azurecr.io/core/external/nginx:1.39.13

baremetal-dnsmasq

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-26-alpine-20240129134230

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-2-26-alpine-20240129135007

bm-collective

mirantis.azurecr.io/bm/bm-collective:base-2-26-alpine-20240129155244

cluster-api-provider-baremetal

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.39.13

ironic

mirantis.azurecr.io/openstack/ironic:yoga-jammy-20240108060019

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:yoga-jammy-20240108060019

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20240117102150

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-2-26-alpine-20240129213142

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-55b02f7-20231019172556

mariadb

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20231127070342

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.24.0-46-gdaf7dbc

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.13.12-31212f9e-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.13.12-31212f9e-amd64

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20240129163811

Core artifacts
Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.39.13.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.39.13.tgz

Helm charts

admission-controller Updated

https://binary.mirantis.com/core/helm/admission-controller-1.39.13.tgz

agent-controller Updated

https://binary.mirantis.com/core/helm/agent-controller-1.39.13.tgz

byo-credentials-controller New

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.39.13.tgz

byo-provider New

https://binary.mirantis.com/core/helm/byo-provider-1.39.13.tgz

ceph-kcc-controller Updated

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.39.13.tgz

cert-manager Updated

https://binary.mirantis.com/core/helm/cert-manager-1.39.13.tgz

cinder-csi-plugin Updated

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.39.13.tgz

client-certificate-controller Updated

https://binary.mirantis.com/core/helm/client-certificate-controller-1.39.13.tgz

configuration-collector Updated

https://binary.mirantis.com/core/helm/configuration-collector-1.39.13.tgz

event-controller Updated

https://binary.mirantis.com/core/helm/event-controller-1.39.13.tgz

host-os-modules-controller New

https://binary.mirantis.com/core/helm/host-os-modules-controller-1.39.13.tgz

iam-controller Updated

https://binary.mirantis.com/core/helm/iam-controller-1.39.13.tgz

kaas-exporter Updated

https://binary.mirantis.com/core/helm/kaas-exporter-1.39.13.tgz

kaas-public-api Updated

https://binary.mirantis.com/core/helm/kaas-public-api-1.39.13.tgz

kaas-ui Updated

https://binary.mirantis.com/core/helm/kaas-ui-1.39.13.tgz

lcm-controller Updated

https://binary.mirantis.com/core/helm/lcm-controller-1.39.13.tgz

license-controller Updated

https://binary.mirantis.com/core/helm/license-controller-1.39.13.tgz

machinepool-controller Updated

https://binary.mirantis.com/core/helm/machinepool-controller-1.39.13.tgz

mcc-cache Updated

https://binary.mirantis.com/core/helm/mcc-cache-1.39.13.tgz

mcc-cache-warmup Updated

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.39.13.tgz

metrics-server Updated

https://binary.mirantis.com/core/helm/metrics-server-1.39.13.tgz

openstack-cloud-controller-manager Updated

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.39.13.tgz

openstack-provider Updated

https://binary.mirantis.com/core/helm/openstack-provider-1.39.13.tgz

os-credentials-controller Updated

https://binary.mirantis.com/core/helm/os-credentials-controller-1.39.13.tgz

policy-controller New

https://binary.mirantis.com/core/helm/policy-controller-1.39.13.tgz

portforward-controller Updated

https://binary.mirantis.com/core/helm/portforward-controller-1.39.13.tgz

proxy-controller Updated

https://binary.mirantis.com/core/helm/proxy-controller-1.39.13.tgz

rbac-controller Updated

https://binary.mirantis.com/core/helm/rbac-controller-1.39.13.tgz

release-controller Updated

https://binary.mirantis.com/core/helm/release-controller-1.39.13.tgz

rhellicense-controller Updated

https://binary.mirantis.com/core/helm/rhellicense-controller-1.39.13.tgz

scope-controller Updated

https://binary.mirantis.com/core/helm/scope-controller-1.39.13.tgz

squid-proxy Updated

https://binary.mirantis.com/core/helm/squid-proxy-1.39.13.tgz

user-controller Updated

https://binary.mirantis.com/core/helm/user-controller-1.39.13.tgz

vsphere-cloud-controller-manager New

https://binary.mirantis.com/core/helm/vsphere-cloud-controller-manager-1.39.13.tgz

vsphere-credentials-controller Updated

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.39.13.tgz

vsphere-csi-plugin New

https://binary.mirantis.com/core/helm/vsphere-csi-plugin-1.39.13.tgz

vsphere-provider Updated

https://binary.mirantis.com/core/helm/vsphere-provider-1.39.13.tgz

vsphere-vm-template-controller Updated

https://binary.mirantis.com/core/helm/vsphere-vm-template-controller-1.39.13.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.39.13

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.39.13

byo-cluster-api-controller New

mirantis.azurecr.io/core/byo-cluster-api-controller:1.39.13

byo-credentials-controller New

mirantis.azurecr.io/core/byo-credentials-controller:1.39.13

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.39.13

cert-manager-controller Updated

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-5

cinder-csi-plugin Updated

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-11

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.39.13

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.39.13

csi-attacher Updated

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-4

csi-node-driver-registrar Updated

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-4

csi-provisioner Updated

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-4

csi-resizer Updated

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-4

csi-snapshotter Updated

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-3

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.39.13

frontend Updated

mirantis.azurecr.io/core/frontend:1.39.13

host-os-modules-controller New

mirantis.azurecr.io/core/host-os-modules-controller:1.39.13

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.39.13

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.39.13

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.39.13

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.39.13

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.39.13

livenessprobe Updated

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-4

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.39.13

mcc-haproxy Updated

mirantis.azurecr.io/lcm/mcc-haproxy:v0.24.0-46-gdaf7dbc

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.24.0-46-gdaf7dbc

metrics-server Updated

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-6

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.39.13

openstack-cloud-controller-manager Updated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-12

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.39.13

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.39.13

policy-controller New

mirantis.azurecr.io/core/policy-controller:1.39.13

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.39.13

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.39.13

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.39.13

registry Updated

mirantis.azurecr.io/lcm/registry:v2.8.1-9

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.39.13

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.39.13

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.39.13

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.39.13

vsphere-cloud-controller-manager New

mirantis.azurecr.io/lcm/kubernetes/vsphere-cloud-controller-manager:v1.27.0-5

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.39.13

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.39.13

vsphere-csi-driver New

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-driver:v3.0.2-1

vsphere-csi-syncer New

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-syncer:v3.0.2-1

vsphere-vm-template-controller Updated

mirantis.azurecr.io/core/vsphere-vm-template-controller:1.39.13

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/core/helm/iam-1.39.13.tgz

Docker images

keycloak Removed

n/a

kubectl New

mirantis.azurecr.io/stacklight/kubectl:1.22-20240105023016

kubernetes-entrypoint Updated

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-55b02f7-20231019172556

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20231127070342

mcc-keycloak New

mirantis.azurecr.io/iam/mcc-keycloak:23.0.3-1

Security notes

The table below includes the total numbers of addressed unique and common CVEs in images by product component since the Container Cloud 2.25.4 patch release. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component

CVE type

Critical

High

Total

Ceph

Unique

0

2

2

Common

0

6

6

KaaS core

Unique

0

7

7

Common

0

8

8

StackLight

Unique

3

7

10

Common

5

19

24

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 24.1: Security notes.

Update notes

This section describes the specific actions you as a cloud operator need to complete before or after your Container Cloud cluster update to the Cluster releases 17.1.0 or 16.1.0.

Consider this information as a supplement to the generic update procedures published in Operations Guide: Automatic upgrade of a management cluster and Update a managed cluster.

Pre-update actions
Unblock cluster update by removing any pinned product artifacts

If any pinned product artifacts are present in the Cluster object of a management or managed cluster, the Admission Controller blocks the update with the invalid HelmReleases configuration error until such artifacts are removed. The update process does not start, and the Admission Controller rejects any changes in the Cluster object except the removal of the fields that contain pinned product artifacts.

Therefore, verify that the following sections of the Cluster objects do not contain any image-related values (tag, name, pullPolicy, repository) or global values inside Helm releases:

  • .spec.providerSpec.value.helmReleases

  • .spec.providerSpec.value.kaas.management.helmReleases

  • .spec.providerSpec.value.regionalHelmReleases

  • .spec.providerSpec.value.regional

For example, a cluster configuration that contains the image and global values shown in the following lines will be blocked until you remove these values:

- name: kaas-ipam
  values:
    kaas_ipam:
      image:
        tag: base-focal-20230127092754
      exampleKey: exampleValue
- name: kaas-ipam
  values:
    global:
      anyKey: anyValue
    kaas_ipam:
      image:
        tag: base-focal-20230127092754
      exampleKey: exampleValue

The custom pinned product artifacts are inspected and blocked by the Admission Controller to ensure that Container Cloud clusters remain consistently updated with the latest security fixes and product improvements.
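To locate any pinned artifacts before the update, you can inspect the Helm releases configured in the Cluster object. The following commands are a minimal sketch that assumes kubectl access to the management cluster; the cluster name and project namespace are placeholders:

# Print the Helm releases configured in the Cluster object
kubectl get cluster <cluster-name> -n <project-namespace> \
  -o jsonpath='{.spec.providerSpec.value.helmReleases}'

# Remove the pinned image-related and global values by editing the object
kubectl edit cluster <cluster-name> -n <project-namespace>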

Note

The pre-update inspection applies only to images delivered by Container Cloud that are overwritten. Any custom images unrelated to the product components are not verified and do not block cluster update.

Update queries for custom log-based metrics in StackLight

Container Cloud 2.26.0 introduces a reorganized and significantly improved StackLight logging pipeline. This change affects the queries implemented in the scope of the logging.metricQueries feature, which is designed for the creation of custom log-based metrics. For the procedure, see StackLight operations: Create logs-based metrics.

If you already have some custom log-based metrics:

  1. Before the cluster update, save the existing queries, for example, as shown in the sketch after this procedure.

  2. After the cluster update, update the queries according to the changes implemented in the scope of the logging.metricQueries feature.

These steps prevent failures of queries containing fields that are renamed or removed in Container Cloud 2.26.0.
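To complete step 1, you can export the existing queries to a file. The following command is a minimal sketch that assumes the custom queries are defined under the stacklight Helm release values of the Cluster object; adjust the path if your queries are stored elsewhere:

# Save the current logging.metricQueries values to a local file
kubectl get cluster <cluster-name> -n <project-namespace> \
  -o jsonpath='{.spec.providerSpec.value.helmReleases[?(@.name=="stacklight")].values.logging.metricQueries}' \
  > metric-queries-backup.json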

Post-update actions
Update bird configuration on BGP-enabled bare metal clusters

Container Cloud 2.26.0 updates the bird daemon from v1.6.8 to v2.0.7 on master nodes of clusters that use BGP announcement of the cluster API load balancer address.

Configuration files for bird v1.x are not fully compatible with those for bird v2.x. Therefore, if you used BGP announcement of the cluster API LB address on a deployment based on Cluster releases 17.0.0 or 16.0.0, update the bird configuration files to fit bird v2.x using the configuration examples provided in the API Reference: MultiRackCluster section.
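To illustrate the syntax change only (for the product-specific configuration, use the examples in API Reference: MultiRackCluster), a BGP protocol stanza in bird v2.x requires an explicit ipv4 channel for the import and export statements. The AS numbers, neighbor address, and filter name below are hypothetical:

# bird v1.x
protocol bgp lb_peer {
  local as 65001;
  neighbor 10.0.0.1 as 65000;
  import none;
  export filter api_lb_vip;
}

# bird v2.x - import and export move into an ipv4 channel
protocol bgp lb_peer {
  local as 65001;
  neighbor 10.0.0.1 as 65000;
  ipv4 {
    import none;
    export filter api_lb_vip;
  };
}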

Review and adjust the storage parameters for OpenSearch

To prevent underused or overused storage space, review your storage space parameters for OpenSearch on the StackLight cluster:

  1. Review the value of elasticsearch.persistentVolumeClaimSize and the real storage available on volumes.

  2. Decide whether you have to additionally set elasticsearch.persistentVolumeUsableStorageSizeGB.

For the description of both parameters, see MOSK Operations Guide: StackLight configuration parameters - OpenSearch; an example values snippet follows.
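The snippet below is a minimal sketch of how both parameters may look in the stacklight Helm release values of the Cluster object; the sizes are hypothetical and must match the real capacity of your volumes:

elasticsearch:
  # Requested PVC size for OpenSearch data volumes (example value)
  persistentVolumeClaimSize: 100Gi
  # Set only if the usable storage differs from the requested PVC size (example value)
  persistentVolumeUsableStorageSizeGB: 90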

2.25.4

The Container Cloud patch release 2.25.4, which is based on the 2.25.0 major release, provides the following updates:

  • Support for the patch Cluster releases 16.0.4 and 17.0.4, where 17.0.4 represents the Mirantis OpenStack for Kubernetes (MOSK) patch release 23.3.4.

  • Security fixes for CVEs in images.

This patch release also supports the latest major Cluster releases 17.0.0 and 16.0.0. It does not support greenfield deployments based on deprecated Cluster releases; use the latest available Cluster release instead.

For main deliverables of the parent Container Cloud release of 2.25.4, refer to 2.25.0.

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.25.4. For artifacts of the Cluster releases introduced in 2.25.4, see patch Cluster releases 17.0.4 and 16.0.4.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20231012141354

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20231012141354

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-113-4f8b843.tgz

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.38.33.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.38.33.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.38.33.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.38.33.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.38.33.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.38.33.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.38.33.tgz

Docker images

ambasador Updated

mirantis.azurecr.io/core/external/nginx:1.38.33

baremetal-dnsmasq

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-25-alpine-20231128145936

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-2-25-alpine-20231204121500

bm-collective

mirantis.azurecr.io/bm/bm-collective:base-2-25-alpine-20231121115652

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.38.33

ironic

mirantis.azurecr.io/openstack/ironic:yoga-jammy-20231204153029

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:yoga-jammy-20231204153029

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20231204142028

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-2-25-alpine-20231121164200

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-55b02f7-20231019172556

mariadb

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20231127070342

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.23.0-88-g35be0fc

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.13.9-ef4faae9-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.13.9-ef4faae9-amd64

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20231121121917

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.38.33.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.38.33.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.38.33.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.38.33.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.38.33.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.38.33.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.38.33.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.38.33.tgz

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.38.33.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.38.33.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.38.33.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.38.33.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.38.33.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.38.33.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.38.33.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.38.33.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.38.33.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.38.33.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.38.33.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.38.33.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.38.33.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.38.33.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.38.33.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.38.33.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.38.33.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.38.33.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.38.33.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.38.33.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.38.33.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.38.33.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.38.33.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.38.33.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.38.33.tgz

vsphere-cloud-controller-manager

https://binary.mirantis.com/core/helm/vsphere-cloud-controller-manager-1.38.33.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.38.33.tgz

vsphere-csi-plugin

https://binary.mirantis.com/core/helm/vsphere-csi-plugin-1.38.33.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.38.33.tgz

vsphere-vm-template-controller

https://binary.mirantis.com/core/helm/vsphere-vm-template-controller-1.38.33.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.38.33

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.38.33

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.38.33

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.38.33

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.38.33

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-5

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-11

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.38.33

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.38.33

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-4

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-4

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-4

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-4

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-3

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.38.33

frontend Updated

mirantis.azurecr.io/core/frontend:1.38.33

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.38.33

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.38.33

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.38.33

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.38.33

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.38.33

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-4

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.38.33

mcc-haproxy Updated

mirantis.azurecr.io/lcm/mcc-haproxy:v0.23.0-88-g35be0fc

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.23.0-88-g35be0fc

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-6

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.38.33

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-12

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.38.33

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.38.33

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.38.33

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.38.33

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.38.33

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-7

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.38.33

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.38.33

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.38.33

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.38.33

vsphere-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/vsphere-cloud-controller-manager:v1.27.0-5

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.38.33

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.38.33

vsphere-csi-driver

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-driver:v3.0.2-1

vsphere-csi-syncer

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-syncer:v3.0.2-1

vsphere-vm-template-controller Updated

mirantis.azurecr.io/core/vsphere-vm-template-controller:1.38.33

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/iam/helm/iam-2.6.4.tgz

Docker images

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20231208023019

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-55b02f7-20231019172556

mariadb

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20231127070342

mcc-keycloak

mirantis.azurecr.io/iam/mcc-keycloak:22.0.5-1

Security notes

The table below includes the total numbers of addressed unique and common CVEs in images by product component since the Container Cloud 2.25.3 patch release. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component

CVE type

Critical

High

Total

Ceph

Unique

0

1

1

Common

0

5

5

KaaS core

Unique

0

1

1

Common

0

1

1

StackLight

Unique

0

3

3

Common

0

9

9

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 23.3.4: Security notes.

Addressed issues

The following issues have been addressed in the Container Cloud patch release 2.25.4 along with the patch Cluster releases 17.0.4 and 16.0.4.

  • [38259] Fixed the issue causing the failure to attach an existing MKE cluster to a Container Cloud management cluster. The issue was related to byo-provider and prevented the attachment of MKE clusters having less than three manager nodes and two worker nodes.

  • [38399] Fixed the issue causing the failure to deploy a management cluster in the offline mode due to the issue in the setup script.

See also

Patch releases

Releases delivered in 2023

This section contains historical information on the unsupported Container Cloud releases delivered in 2023. For the latest supported Container Cloud release, see Container Cloud releases.

Unsupported Container Cloud releases 2023

Version

Release date

Summary

2.25.3

Dec 18, 2023

Container Cloud 2.25.3 is the third patch release of the 2.25.x and MOSK 23.3.x release series that introduces the following updates:

  • Support for MKE 3.7.3

  • Patch Cluster release 17.0.3 for MOSK 23.3.3

  • Patch Cluster release 16.0.3

  • Security fixes for CVEs in images

2.25.2

Dec 05, 2023

Container Cloud 2.25.2 is the second patch release of the 2.25.x and MOSK 23.3.x release series that introduces the following updates:

  • Support for attachment of MKE clusters not deployed by Container Cloud to vSphere-based management clusters

  • Patch Cluster release 17.0.2 for MOSK 23.3.2

  • Patch Cluster release 16.0.2

  • Security fixes for CVEs in images

2.25.1

Nov 27, 2023

Container Cloud 2.25.1 is the first patch release of the 2.25.x and MOSK 23.3.x release series that introduces the following updates:

  • MKE:

    • Support for MKE 3.7.2

    • Amendments for MKE configuration managed by Container Cloud

  • vSphere:

    • Switch to an external vSphere cloud controller manager

    • Mandatory MKE upgrade from 3.6 to 3.7

  • StackLight:

    • Kubernetes Network Policies

    • MKE benchmark compliance

  • Patch Cluster release 17.0.1 for MOSK 23.3.1

  • Patch Cluster release 16.0.1

  • Security fixes for CVEs in images

2.25.0

Nov 06, 2023

  • Container Cloud Bootstrap v2

  • Support for MKE 3.7.1 and MCR 23.0.7

  • General availability for RHEL 8.7 on vSphere-based clusters

  • Automatic cleanup of old Ubuntu kernel packages

  • Configuration of a custom OIDC provider for MKE on managed clusters

  • General availability for graceful machine deletion

  • Bare metal provider:

    • General availability for MetalLBConfigTemplate and MetalLBConfig objects

    • Manual IP address allocation for bare metal hosts during PXE provisioning

  • Ceph:

    • Addressing storage devices using by-id identifiers

    • Verbose Ceph cluster status in the KaaSCephCluster.status specification

    • Detailed view of a Ceph cluster summary in web UI

  • StackLight:

    • Fluentd log forwarding to Splunk

    • Ceph monitoring improvements

    • Optimization of StackLight NodeDown alerts

    • OpenSearch performance optimization

    • Documentation: Export data from Table panels of Grafana dashboards to CSV

  • Container Cloud web UI:

    • Status of infrastructure health for bare metal and OpenStack providers

    • Parallel update of worker nodes

    • Graceful machine deletion

2.24.5

Sep 26, 2023

Container Cloud 2.24.5 is the third patch release of the 2.24.x and MOSK 23.2.x release series that introduces the following updates:

  • Patch Cluster release 15.0.4 for MOSK 23.2.3

  • Patch Cluster release 14.0.4

  • Security fixes for CVEs of Critical and High severity

2.24.4

Sep 14, 2023

Container Cloud 2.24.4 is the second patch release of the 2.24.x and MOSK 23.2.x release series that introduces the following updates:

  • Patch Cluster release 15.0.3 for MOSK 23.2.2

  • Patch Cluster release 14.0.3

  • Multi-rack topology for bare metal managed clusters

  • Configuration of the etcd storage quota

  • Security fixes for CVEs of Critical and High severity

2.24.3

Aug 29, 2023

Container Cloud 2.24.3 is the first patch release of the 2.24.x and MOSK 23.2.x release series that introduces the following updates:

  • Patch Cluster release 15.0.2 for MOSK 23.2.1

  • Patch Cluster release 14.0.2

  • Support for MKE 3.6.6 and updated docker-ee-cli 20.10.18 for MCR 20.10.17

  • GA for TLS certificates configuration

  • Security fixes for CVEs of High severity

  • End of support for new deployments on deprecated major or patch Cluster releases

For details, see Patch releases.

2.24.2

Aug 21, 2023

Based on 2.24.1, Container Cloud 2.24.2:

  • Introduces the major Cluster release 15.0.1 that is based on 14.0.1 and supports Mirantis OpenStack for Kubernetes (MOSK) 23.2.

  • Supports the Cluster release 14.0.1. The deprecated Cluster release 14.0.0 as well as the 12.7.x and 11.7.x series are not supported for new deployments.

  • Contains features and amendments of the parent releases 2.24.0 and 2.24.1.

2.24.1

Jul 27, 2023

Patch release containing hot fixes for the major Container Cloud release 2.24.0.

2.24.0

Jul 20, 2023

  • Support for MKE 3.6.5 and MCR 20.10.17

  • Bare metal:

    • Automated upgrade of operating system on management and regional clusters

    • Support for WireGuard

    • Configuration of MTU size for Calico

    • MetalLB configuration changes

  • vSphere:

    • Support for RHEL 8.7

    • MetalLB configuration changes

  • OpenStack:

    • Custom flavors for Octavia

    • Deletion of persistent volumes during a cluster deletion

  • IAM:

    • Support for Keycloak Quarkus

    • The admin role for management cluster

  • Security:

    • Support for auditd

    • General availability for TLS certificates configuration

  • LCM:

    • Custom host names for cluster machines

    • Cache warm-up for managed clusters

  • Ceph:

    • Automatic upgrade of Ceph from Pacific to Quincy

    • Ceph non-admin client for a shared Ceph cluster

    • Dropping of redundant components from management and regional clusters

    • Documentation enhancements for Ceph OSDs

  • StackLight:

    • Major version update of OpenSearch and OpenSearch Dashboards from 1.3.7 to 2.7.0

    • Monitoring of network connectivity between Ceph nodes

    • Improvements to StackLight alerting

    • Performance tuning of Grafana dashboards

    • Dropped and white-listed metrics

  • Container Cloud web UI:

    • Graceful cluster reboot

    • Creation and deletion of bare metal host credentials

    • Node labeling improvements

2.23.5

June 05, 2023

Container Cloud 2.23.5 is the fourth patch release of the 2.23.0 and 2.23.1 major releases that:

  • Contains security fixes for critical and high CVEs

  • Introduces the patch Cluster release 12.7.4 for MOSK 23.1.4

  • Introduces the patch Cluster release 11.7.4

  • Supports all major Cluster releases introduced in previous 2.23.x releases

  • Does not support new deployments on deprecated major or patch Cluster releases

For details, see Patch releases.

2.23.4

May 22, 2023

Container Cloud 2.23.4 is the third patch release of the 2.23.0 and 2.23.1 major releases that:

  • Contains several addressed issues and security fixes for critical and high CVEs

  • Introduces the patch Cluster release 12.7.3 for MOSK 23.1.3

  • Introduces the patch Cluster release 11.7.3

  • Supports all major Cluster releases introduced in previous 2.23.x releases

  • Does not support new deployments on deprecated major or patch Cluster releases

For details, see Patch releases.

2.23.3

May 04, 2023

Container Cloud 2.23.3 is the second patch release of the 2.23.0 and 2.23.1 major releases that:

  • Contains security fixes for critical and high CVEs

  • Introduces the patch Cluster release 12.7.2 for MOSK 23.1.2

  • Introduces the patch Cluster release 11.7.2

  • Supports all major Cluster releases introduced in previous 2.23.x releases

  • Does not support new deployments on deprecated major or patch Cluster releases

For details, see Patch releases.

2.23.2

Apr 20, 2023

Container Cloud 2.23.2 is the first patch release of the 2.23.0 and 2.23.1 major releases that:

  • Contains security fixes for critical and high CVEs

  • Introduces support for patch Cluster releases 12.7.1 or 11.7.1

  • Supports all major Cluster releases introduced and supported in the previous 2.23.x releases

For details, see Patch releases.

2.23.1

Apr 04, 2023

Based on 2.23.0, Container Cloud 2.23.1:

  • Introduces the Cluster release 12.7.0 that is based on 11.7.0 and supports Mirantis OpenStack for Kubernetes (MOSK) 23.1.

  • Supports the Cluster release 11.7.0. The deprecated Cluster releases 12.5.0 and 11.6.0 are not supported for new deployments.

  • Contains features and amendments of the parent releases 2.23.0 and 2.22.0.

2.23.0

Mar 07, 2023

  • MKE patch release update from 3.5.5 to 3.5.7

  • Automatic upgrade of Ceph from Octopus 15.2.17 to Pacific 16.2.11

  • Graceful cluster reboot using the GracefulRebootRequest CR

  • Readiness fields for Machine and Cluster objects

  • Deletion of persistent volumes during an OpenStack-based cluster deletion

  • Option to disable time sync management

  • Upgrade button for easy cluster update through the web UI

  • Deployment of an Equinix Metal regional cluster with private networking on top of a public management cluster

  • StackLight:

    • HA setup for iam-proxy in StackLight

    • Log forwarding to third-party systems using Fluentd plugins

    • MCC Applications Performance Grafana dashboard

    • PVC configuration for Reference Application

2.22.0

Jan 31, 2023

  • Custom network configuration for Equinix Metal managed clusters

  • Custom TLS certificates for the StackLight iam-proxy endpoints

  • Notification of a required reboot in the status of a bare metal machine

  • Cluster deployment and update history objects

  • Extended logging format for essential management cluster components

  • StackLight:

    • Bond interfaces monitoring

    • Calculation of storage retention time

    • Deployment of cAdvisor as a StackLight component

    • Container Cloud web UI support for Reference Application

  • Ceph:

    • Two Ceph Managers by default for HA

    • General availability of Ceph Shared File System

    • Sharing Ceph between managed clusters or to an attached MKE cluster

2.25.3

The Container Cloud patch release 2.25.3, which is based on the 2.25.0 major release, provides the following updates:

  • Support for MKE 3.7.3.

  • Support for the patch Cluster releases 16.0.3 and 17.0.3, where 17.0.3 represents the Mirantis OpenStack for Kubernetes (MOSK) patch release 23.3.3.

  • Security fixes for CVEs in images.

This patch release also supports the latest major Cluster releases 17.0.0 and 16.0.0. It does not support greenfield deployments based on deprecated Cluster releases; use the latest available Cluster release instead.

For main deliverables of the parent Container Cloud release of 2.25.3, refer to 2.25.0.

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.25.3. For artifacts of the Cluster releases introduced in 2.25.3, see patch Cluster releases 17.0.3 and 16.0.3.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20231012141354

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20231012141354

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-113-4f8b843.tgz

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.38.31.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.38.31.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.38.31.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.38.31.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.38.31.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.38.31.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.38.31.tgz

Docker images

ambasador Updated

mirantis.azurecr.io/core/external/nginx:1.38.31

baremetal-dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-25-alpine-20231128145936

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-2-25-alpine-20231204121500

bm-collective

mirantis.azurecr.io/bm/bm-collective:base-2-25-alpine-20231121115652

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.38.31

ironic Updated

mirantis.azurecr.io/openstack/ironic:yoga-jammy-20231204153029

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:yoga-jammy-20231204153029

ironic-prometheus-exporter Updated

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20231204142028

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-2-25-alpine-20231121164200

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-55b02f7-20231019172556

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20231127070342

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.23.0-87-gc9d7d3b

metallb-controller Updated

mirantis.azurecr.io/bm/metallb/controller:v0.13.9-ef4faae9-amd64

metallb-speaker Updated

mirantis.azurecr.io/bm/metallb/speaker:v0.13.9-ef4faae9-amd64

syslog-ng Updated

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20231121121917

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.38.31.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.38.31.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.38.31.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.38.31.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.38.31.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.38.31.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.38.31.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.38.31.tgz

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.38.31.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.38.31.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.38.31.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.38.31.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.38.31.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.38.31.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.38.31.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.38.31.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.38.31.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.38.31.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.38.31.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.38.31.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.38.31.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.38.31.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.38.31.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.38.31.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.38.31.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.38.31.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.38.31.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.38.31.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.38.31.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.38.31.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.38.31.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.38.31.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.38.31.tgz

vsphere-cloud-controller-manager

https://binary.mirantis.com/core/helm/vsphere-cloud-controller-manager-1.38.31.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.38.31.tgz

vsphere-csi-plugin

https://binary.mirantis.com/core/helm/vsphere-csi-plugin-1.38.31.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.38.31.tgz

vsphere-vm-template-controller

https://binary.mirantis.com/core/helm/vsphere-vm-template-controller-1.38.31.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.38.31

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.38.31

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.38.31

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.38.31

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.38.31

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-5

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-11

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.38.31

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.38.31

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-4

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-4

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-4

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-4

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-3

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.38.31

frontend Updated

mirantis.azurecr.io/core/frontend:1.38.31

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.38.31

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.38.31

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.38.31

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.38.31

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.38.31

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-4

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.38.31

mcc-haproxy Updated

mirantis.azurecr.io/lcm/mcc-haproxy:v0.23.0-87-gc9d7d3b

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.23.0-87-gc9d7d3b

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-6

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.38.31

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-12

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.38.31

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.38.31

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.38.31

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.38.31

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.38.31

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-7

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.38.31

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.38.31

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.38.31

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.38.31

vsphere-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/vsphere-cloud-controller-manager:v1.27.0-5

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.38.31

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.38.31

vsphere-csi-driver

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-driver:v3.0.2-1

vsphere-csi-syncer

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-syncer:v3.0.2-1

vsphere-vm-template-controller Updated

mirantis.azurecr.io/core/vsphere-vm-template-controller:1.38.31

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/iam/helm/iam-2.6.3.tgz

Docker images

keycloak

n/a (replaced with mcc-keycloak)

kubectl New

mirantis.azurecr.io/stacklight/kubectl:1.22-20231201023019

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-55b02f7-20231019172556

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20231127070342

mcc-keycloak New

mirantis.azurecr.io/iam/mcc-keycloak:22.0.5-1

Security notes

The table below includes the total numbers of addressed unique and common CVEs in images by product component since the Container Cloud 2.25.2 patch release. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component

CVE type

Critical

High

Total

Ceph

Unique

0

1

1

Common

0

3

3

KaaS core

Unique

2

9

11

Common

3

18

21

StackLight

Unique

1

18

19

Common

1

52

53

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 23.3.3: Security notes.

Addressed issues

The following issues have been addressed in the Container Cloud patch release 2.25.3 along with the patch Cluster releases 17.0.3 and 16.0.3.

  • [37634][OpenStack] Fixed the issue with a management or managed cluster deployment or upgrade being blocked by all pods being stuck in the Pending state due to incorrect secrets being used to initialize the OpenStack external Cloud Provider Interface.

  • [37766][IAM] Fixed the issue with sign-in to the MKE web UI of the management cluster using the Sign in with External Provider option, which failed with the invalid parameter: redirect_uri error.

See also

Patch releases

2.25.2

The Container Cloud patch release 2.25.2, which is based on the 2.25.0 major release, provides the following updates:

  • Renewed support for attaching MKE clusters that were not originally deployed by Container Cloud to vSphere-based management clusters.

  • Support for the patch Cluster releases 16.0.2 and 17.0.2, where 17.0.2 represents the Mirantis OpenStack for Kubernetes (MOSK) patch release 23.3.2.

  • Security fixes for CVEs in images.

This patch release also supports the latest major Cluster releases 17.0.0 and 16.0.0. It does not support greenfield deployments based on the deprecated Cluster releases 14.0.1, 15.0.1, 16.0.1, and 17.0.1; use the latest available Cluster releases instead.

For main deliverables of the parent Container Cloud release of 2.25.2, refer to 2.25.0.

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.25.2. For artifacts of the Cluster releases introduced in 2.25.2, see patch Cluster releases 17.0.2 and 16.0.2.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20231012141354

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20231012141354

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-113-4f8b843.tgz

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.38.29.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.38.29.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.38.29.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.38.29.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.38.29.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.38.29.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.38.29.tgz

Docker images

ambasador Updated

mirantis.azurecr.io/core/external/nginx:1.38.29

baremetal-dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-2-25-alpine-20231121112823

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-2-25-alpine-20231121112816

bm-collective Updated

mirantis.azurecr.io/bm/bm-collective:base-2-25-alpine-20231121115652

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.38.29

ironic Updated

mirantis.azurecr.io/openstack/ironic:yoga-jammy-20231120060019

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:yoga-jammy-20231030060018

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20230912104602

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-2-25-alpine-20231121164200

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-55b02f7-20231019172556

mariadb

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20231024091216

mcc-keepalived

mirantis.azurecr.io/docker.mirantis.net/lcm/mcc-keepalived:v0.23.0-84-g8d74d7c

metallb-controller Updated

mirantis.azurecr.io/bm/metallb/controller:v0.13.9-ef4faae9-amd64

metallb-speaker Updated

mirantis.azurecr.io/bm/metallb/speaker:v0.13.9-ef4faae9-amd64

syslog-ng Updated

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20231121121917

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.38.29.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.38.29.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.38.29.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.38.29.tgz

byo-credentials-controller New

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.38.29.tgz

byo-provider New

https://binary.mirantis.com/core/helm/byo-provider-1.38.29.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.38.29.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.38.29.tgz

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.38.29.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.38.29.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.38.29.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.38.29.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.38.29.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.38.29.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.38.29.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.38.29.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.38.29.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.38.29.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.38.29.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.38.29.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.38.29.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.38.29.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.38.29.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.38.29.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.38.29.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.38.29.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.38.29.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.38.29.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.38.29.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.38.29.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.38.29.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.38.29.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.38.29.tgz

vsphere-cloud-controller-manager

https://binary.mirantis.com/core/helm/vsphere-cloud-controller-manager-1.38.29.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.38.29.tgz

vsphere-csi-plugin

https://binary.mirantis.com/core/helm/vsphere-csi-plugin-1.38.29.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.38.29.tgz

vsphere-vm-template-controller

https://binary.mirantis.com/core/helm/vsphere-vm-template-controller-1.38.29.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.38.29

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.38.29

byo-credentials-controller New

mirantis.azurecr.io/core/byo-credentials-controller:1.38.29

byo-provider New

mirantis.azurecr.io/core/byo-provider:1.38.29

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.38.29

cert-manager-controller Updated

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-5

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-11

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.38.29

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.38.29

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-4

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-4

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-4

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-4

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-3

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.38.29

frontend Updated

mirantis.azurecr.io/core/frontend:1.38.29

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.38.29

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.38.29

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.38.29

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.38.29

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.38.29

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-4

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.38.29

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.23.0-84-g8d74d7c

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.23.0-84-g8d74d7c

metrics-server Updated

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-6

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.38.29

openstack-cloud-controller-manager Updated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-12

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.38.29

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.38.29

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.38.29

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.38.29

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.38.29

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-7

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.38.29

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.38.29

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.38.29

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.38.29

vsphere-cloud-controller-manager Updated

mirantis.azurecr.io/lcm/kubernetes/vsphere-cloud-controller-manager:v1.27.0-5

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.38.29

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.38.29

vsphere-csi-driver Updated

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-driver:v3.0.2-1

vsphere-csi-syncer Updated

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-syncer:v3.0.2-1

vsphere-vm-template-controller Updated

mirantis.azurecr.io/core/vsphere-vm-template-controller:1.38.29

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts

iam

https://binary.mirantis.com/iam/helm/iam-2.5.10.tgz

Docker images

keycloak

mirantis.azurecr.io/iam/keycloak:0.6.0-1

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-55b02f7-20231019172556

mariadb

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20231024091216

Security notes

The table below includes the total numbers of addressed unique and common CVEs in images by product component since the Container Cloud 2.25.1 patch release. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Product component    CVE type    Critical    High    Total
-------------------  ----------  ----------  ------  -----
Kaas core            Unique      0           6       6
                     Common      0           20      20
Ceph                 Unique      0           2       2
                     Common      0           6       6
StackLight           Unique      0           16      16
                     Common      0           70      70

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 23.3.2: Security notes.

See also

Patch releases

2.25.1

The Container Cloud patch release 2.25.1, which is based on the 2.25.0 major release, provides the following updates:

  • Support for the patch Cluster releases 16.0.1 and 17.0.1, the latter representing the Mirantis OpenStack for Kubernetes (MOSK) patch release 23.3.1.

  • Several product improvements. For details, see Enhancements.

  • Security fixes for CVEs in images.

This patch release also supports the latest major Cluster releases 17.0.0 and 16.0.0. It does not support greenfield deployments based on the deprecated Cluster releases 14.1.0, 14.0.1, and 15.0.1. Use the latest available Cluster releases instead.

For main deliverables of the parent Container Cloud release of 2.25.1, refer to 2.25.0.

Enhancements

This section outlines new features and enhancements introduced in the Container Cloud patch release 2.25.1 along with Cluster releases 17.0.1 and 16.0.1.

Support for MKE 3.7.2

Introduced support for Mirantis Kubernetes Engine (MKE) 3.7.2 on Container Cloud management and managed clusters. On existing managed clusters, MKE is updated to the latest supported version when you update your cluster to the patch Cluster release 17.0.1 or 16.0.1.

MKE options managed by Container Cloud

To simplify MKE configuration through the API, management of the MKE parameters controlled by Container Cloud has been moved from lcm-ansible to lcm-controller. Container Cloud now overrides only the set of MKE configuration parameters that it manages automatically.

Improvements in the MKE benchmark compliance for StackLight

Analyzed and fixed the majority of failed checks in the MKE benchmark compliance for StackLight. The following controls were analyzed:

Control ID 5.2.7 - Minimize the admission of containers with the NET_RAW capability

  Analyzed item:

  • Containers with NET_RAW capability

Control ID 5.2.6 - Minimize the admission of root containers

  Analyzed items:

  • Containers permitting root

  • Containers with the RunAsUser root or root not set

  • Containers with the SYS_ADMIN capability

  • Container UID is a range of hosts

Kubernetes network policies in StackLight

Introduced Kubernetes network policies for all StackLight components. The feature is implemented using the networkPolicies parameter that is enabled by default.

The Kubernetes NetworkPolicy resource allows controlling network connections to and from Pods within a cluster. This enhances security by restricting communication from compromised Pod applications and provides transparency into how applications communicate with each other.
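
You can verify that the policies exist on a cluster by listing the NetworkPolicy objects in the StackLight namespace, for example (a generic check; the stacklight namespace is the one used by StackLight components elsewhere in this documentation):

# List the NetworkPolicy objects created for StackLight components
kubectl -n stacklight get networkpolicy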

External vSphere CCM with CSI supporting vSphere 6.7 on Kubernetes 1.27

Switched to the external vSphere cloud controller manager (CCM) that uses vSphere Container Storage Plug-in 3.0 for volume attachment. The feature implementation includes an automatic migration of existing PersistentVolume and PersistentVolumeClaim objects.

The external vSphere CCM supports vSphere 6.7 on Kubernetes 1.27 as compared to the in-tree vSphere CCM that does not support vSphere 6.7 since Kubernetes 1.25.

Important

The major Cluster release 14.1.0 is the last Cluster release for the vSphere provider based on MCR 20.10 and MKE 3.6.6 with Kubernetes 1.24. Therefore, Mirantis highly recommends updating your existing vSphere-based managed clusters to the Cluster release 16.0.1, which contains newer versions of MCR, MKE, and Kubernetes. Otherwise, your management cluster upgrade to Container Cloud 2.25.2 will be blocked.

For the update procedure, refer to Operations Guide: Update a patch Cluster release of a managed cluster.

Since Container Cloud 2.25.1, the major Cluster release 14.1.0 is deprecated. Greenfield vSphere-based deployments on this Cluster release are not supported. Use the patch Cluster release 16.0.1 for new deployments instead.

Artifacts

This section lists the artifacts of components included in the Container Cloud patch release 2.25.1. For artifacts of the Cluster releases introduced in 2.25.1, see patch Cluster releases 17.0.1 and 16.0.1.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries Updated

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20231012141354

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20231012141354

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-113-4f8b843.tgz

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.38.22.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.38.22.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.38.22.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.38.22.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.38.22.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.38.22.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.38.22.tgz

Docker images Updated

ambasador

mirantis.azurecr.io/core/external/nginx:1.38.22

baremetal-dnsmasq

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-alpine-20231030180650

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-alpine-20231101201729

bm-collective

mirantis.azurecr.io/bm/bm-collective:base-alpine-20231027135748

cluster-api-provider-baremetal

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.38.22

ironic

mirantis.azurecr.io/openstack/ironic:yoga-jammy-20231030060018

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:yoga-jammy-20231030060018

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20230912104602

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-alpine-20231027151726

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-55b02f7-20231019172556

mariadb

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20231024091216

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.23.0-84-g8d74d7c

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.13.9-fd3b03b0-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.13.9-fd3b03b0-amd64

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-apline-20231030181839

Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.38.22.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.38.22.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.38.22.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.38.22.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.38.22.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.38.22.tgz

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.38.22.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.38.22.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.38.22.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.38.22.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.38.22.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.38.22.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.38.22.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.38.22.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.38.22.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.38.22.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.38.22.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.38.22.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.38.22.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.38.22.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.38.22.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.38.22.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.38.22.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.38.22.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.38.22.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.38.22.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.38.22.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.38.22.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.38.22.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.38.22.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.38.22.tgz

vsphere-cloud-controller-manager New

https://binary.mirantis.com/core/helm/vsphere-cloud-controller-manager-1.38.22.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.38.22.tgz

vsphere-csi-plugin New

https://binary.mirantis.com/core/helm/vsphere-csi-plugin-1.38.22.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.38.22.tgz

vsphere-vm-template-controller

https://binary.mirantis.com/core/helm/vsphere-vm-template-controller-1.38.22.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.38.22

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.38.22

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.38.22

cert-manager-controller Updated

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-4

cinder-csi-plugin Updated

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-11

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.38.22

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.38.22

csi-attacher Updated

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-4

csi-node-driver-registrar Updated

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-4

csi-provisioner Updated

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-4

csi-resizer Updated

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-4

csi-snapshotter Updated

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-3

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.38.22

frontend Updated

mirantis.azurecr.io/core/frontend:1.38.22

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.38.22

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.38.22

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.38.22

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.38.22

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.38.22

livenessprobe Updated

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-4

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.38.22

mcc-haproxy Updated

mirantis.azurecr.io/lcm/mcc-haproxy:v0.23.0-84-g8d74d7c

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.23.0-84-g8d74d7c

metrics-server Updated

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-4

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.38.22

openstack-cloud-controller-manager Updated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-11

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.38.22

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.38.22

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.38.22

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.38.22

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.38.22

registry Updated

mirantis.azurecr.io/lcm/registry:v2.8.1-7

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.38.22

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.38.22

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.38.22

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.38.22

vsphere-cloud-controller-manager New

mirantis.azurecr.io/lcm/kubernetes/vsphere-cloud-controller-manager:v1.27.0-4

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.38.22

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.38.22

vsphere-csi-driver New

mirantis.azurecr.io/core/external/vsphere-csi-driver:v3.0.2

vsphere-csi-syncer New

mirantis.azurecr.io/core/external/vsphere-csi-syncer:v3.0.2

vsphere-vm-template-controller Updated

mirantis.azurecr.io/core/vsphere-vm-template-controller:1.38.22

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/iam/helm/iam-2.5.10.tgz

Docker images Updated

keycloak

mirantis.azurecr.io/iam/keycloak:0.6.0-1

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-55b02f7-20231019172556

mariadb

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20231024091216

Security notes

The table below includes the total numbers of addressed unique and common CVEs in images by product component since the Container Cloud 2.25.0 major release. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Container Cloud component    CVE type    Critical    High    Total
---------------------------  ----------  ----------  ------  -----
Kaas core                    Unique      0           12      12
                             Common      0           280     280
Ceph                         Unique      0           8       8
                             Common      0           41      41
StackLight                   Unique      4           33      37
                             Common      18          130     148

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 23.3.1: Security notes.

Addressed issues

The following issues have been addressed in the Container Cloud patch release 2.25.1 along with the patch Cluster releases 17.0.1 and 16.0.1.

  • [35426] [StackLight] Fixed the issue with the prometheus-libvirt-exporter Pod failing to reconnect to libvirt after the libvirt Pod recovery from a failure.

  • [35339] [LCM] Fixed the issue with the LCM Ansible task of copying kubectl from the ucp-hyperkube image failing if kubectl exec is in use, for example, during a management cluster upgrade.

  • [35089] [bare metal, Calico] Fixed the issue with arbitrary Kubernetes pods getting stuck in an error loop due to a failed Calico networking setup for that pod.

  • [33936] [bare metal, Calico] Fixed the issue with deletion failure of a controller node during machine replacement due to the upstream Calico issue.

See also

Patch releases

2.25.0

The Mirantis Container Cloud major release 2.25.0:

  • Introduces support for the Cluster release 17.0.0 that is based on the Cluster release 16.0.0 and represents Mirantis OpenStack for Kubernetes (MOSK) 23.3.

  • Introduces support for the Cluster release 16.0.0 that is based on Mirantis Container Runtime (MCR) 23.0.7 and Mirantis Kubernetes Engine (MKE) 3.7.1 with Kubernetes 1.27.

  • Introduces support for the Cluster release 14.1.0 that is dedicated for the vSphere provider only. This is the last Cluster release for the vSphere provider based on MKE 3.6.6 with Kubernetes 1.24.

  • Does not support greenfield deployments on deprecated Cluster releases of the 15.x and 14.x series. Use the latest available Cluster releases of the series instead.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.25.0.

Enhancements

This section outlines new features and enhancements introduced in the Container Cloud release 2.25.0. For the list of enhancements delivered with the Cluster releases introduced by Container Cloud 2.25.0, see 17.0.0, 16.0.0, and 14.1.0.

Container Cloud Bootstrap v2

Implemented Container Cloud Bootstrap v2 that provides an exceptional user experience to set up Container Cloud. With Bootstrap v2, you also gain access to a comprehensive and user-friendly web UI for the OpenStack and vSphere providers.

Bootstrap v2 empowers you to effortlessly provision management clusters before deployment, while benefiting from a streamlined process that isolates each step. This approach not only simplifies the bootstrap process but also enhances troubleshooting capabilities for addressing any potential intermediate failures.

Note

The Bootstrap web UI support for the bare metal provider will be added in one of the following Container Cloud releases.

General availability for ‘MetalLBConfigTemplate’ and ‘MetalLBConfig’ objects

Completed development of the MetalLB configuration related to address allocation and announcement for load-balanced services using the MetalLBConfigTemplate object for bare metal and the MetalLBConfig object for vSphere. Container Cloud uses these objects in default templates as recommended during creation of a management or managed cluster.

At the same time, removed the possibility to use the deprecated options, such as configInline value of the MetalLB chart and the use of Subnet objects without new MetalLBConfigTemplate and MetalLBConfig objects.

The automated migration of these deprecated options, which was applied during creation of clusters of any type or during cluster update to Container Cloud 2.24.x, is removed during your management cluster upgrade to Container Cloud 2.25.0. After that, any changes in the MetalLB configuration related to address allocation and announcement for load-balanced services are applied using the MetalLBConfig, MetalLBConfigTemplate, and Subnet objects only.

Manual IP address allocation for bare metal hosts during PXE provisioning

Technology Preview

Implemented the following annotations for bare metal hosts that enable manual allocation of IP addresses during PXE provisioning on managed clusters:

  • host.dnsmasqs.metal3.io/address - assigns a specific IP address to a host

  • baremetalhost.metal3.io/detached - pauses automatic host management

These annotations are helpful if you have a limited amount of free and unused IP addresses for server provisioning. Using these annotations, you can manually create bare metal hosts one by one and provision servers in small, manually managed chunks.
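
For illustration, the following BareMetalHost excerpt shows where the annotations are set. This is a minimal sketch that assumes the upstream Metal3 object layout; the host name, namespace, and IP address are placeholders, and the empty value of the detached annotation is only an example:

apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: <hostName>
  namespace: <projectName>
  annotations:
    # Manually assign a specific IP address to the host during PXE provisioning
    host.dnsmasqs.metal3.io/address: "10.0.50.21"
    # Pause automatic host management until the host is ready for provisioning
    baremetalhost.metal3.io/detached: ""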

Status of infrastructure health for bare metal and OpenStack providers

Implemented the Infrastructure Status condition to monitor infrastructure readiness in the Container Cloud web UI during cluster deployment for bare metal and OpenStack providers. Readiness of the following components is monitored:

  • Bare metal: the MetalLBConfig object along with MetalLB and DHCP subnets

  • OpenStack: cluster network, routers, load balancers, and Bastion along with their ports and floating IPs

For the bare metal provider, also implemented the Infrastructure Status condition for machines to monitor readiness of the IPAMHost, L2Template, BareMetalHost, and BareMetalHostProfile objects associated with the machine.
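
The condition is intended for the Container Cloud web UI. If you prefer the CLI, a rough way to inspect it is to search the Cluster or Machine object status for the condition name; the exact status field layout is provider-specific, so the following command only filters the object output:

# Look for the Infrastructure Status condition in the Cluster object status
kubectl -n <projectName> get cluster <clusterName> -o yaml | grep -i -A 3 infrastructure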

General availability for RHEL 8.7 on vSphere-based clusters

Introduced general availability support for RHEL 8.7 on VMware vSphere-based clusters. You can install this operating system on any type of a Container Cloud cluster including the bootstrap node.

Note

RHEL 7.9 is not supported as the operating system for the bootstrap node.

Caution

A Container Cloud cluster based on mixed RHEL versions, such as RHEL 7.9 and 8.7, is not supported.

Automatic cleanup of old Ubuntu kernel packages

Implemented automatic cleanup of old Ubuntu kernel and other unnecessary system packages. During cleanup, Container Cloud keeps the two most recent kernel versions, which is the default behavior of the Ubuntu apt autoremove command.

Mirantis recommends keeping two kernel versions, with the previous version serving as a fallback in case the current kernel becomes unstable. However, if you absolutely require keeping only the latest version of kernel packages, you can use the cleanup-kernel-packages script after considering all possible risks.
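
For illustration only, the following commands show the generic Ubuntu workflow that the cleanup is based on, not the Container Cloud automation itself; run them on a node to inspect the installed kernels and preview what the default cleanup would remove:

# List the kernel packages currently installed on the node
dpkg --list 'linux-image-*' 'linux-headers-*'
# Preview what the default Ubuntu cleanup would remove without applying it
sudo apt-get --dry-run autoremove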

Configuration of a custom OIDC provider for MKE on managed clusters

Implemented the ability to configure a custom OpenID Connect (OIDC) provider for MKE on managed clusters using the ClusterOIDCConfiguration custom resource. Using this resource, you can add your own OIDC provider configuration to authenticate user requests to Kubernetes.

Note

For OpenStack and StackLight, Container Cloud supports only Keycloak, which is configured on the management cluster, as the OIDC provider.

The admin role for management cluster

Implemented the management-admin OIDC role to grant full admin access specifically to a management cluster. This role enables the user to manage Pods and all other resources of the cluster, for example, for debugging purposes.

General availability for graceful machine deletion

Introduced general availability support for graceful machine deletion with a safe cleanup of node resources:

  • Changed the default deletion policy from unsafe to graceful for machine deletion using the Container Cloud API.

    Using the deletionPolicy: graceful parameter in the providerSpec.value section of the Machine object, the cloud provider controller prepares a machine for deletion by cordoning, draining, and removing the related node from Docker Swarm. If required, you can abort a machine deletion when using deletionPolicy: graceful, but only before the related node is removed from Docker Swarm. For an example of where the parameter is set, see the sketch after this list.

  • Implemented the following machine deletion methods in the Container Cloud web UI: Graceful, Unsafe, Forced.

  • Added support for deletion of manager machines on MOSK-based clusters using either of the deletion policies mentioned above. This operation is intended only for replacement or recovery of failed nodes.
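
The following Machine object excerpt is a minimal sketch of where the deletion policy is set, based on the parameter path described above; all unrelated fields are omitted:

spec:
  providerSpec:
    value:
      # Possible values: graceful (default), unsafe, forced.
      # With graceful, the node is cordoned, drained, and removed from
      # Docker Swarm before the machine is deleted.
      deletionPolicy: graceful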

General availability for parallel update of worker nodes

Completed development of the parallel update of worker nodes during cluster update by implementing the ability to configure the required options using the Container Cloud web UI. Parallelizing of node update operations significantly optimizes the update efficiency of large clusters.

The following options are added to the Create Cluster window:

  • Parallel Upgrade Of Worker Machines that sets the maximum number of worker nodes to update simultaneously

  • Parallel Preparation For Upgrade Of Worker Machines that sets the maximum number of worker nodes for which new artifacts are downloaded at a given moment of time

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.25.0 along with the Cluster releases 17.0.0, 16.0.0, and 14.1.0.

Note

This section provides descriptions of issues addressed since the last Container Cloud patch release 2.24.5.

For details on addressed issues in earlier patch releases since 2.24.0, which are also included into the major release 2.25.0, refer to 2.24.x patch releases.

  • [34462] [BM] Fixed the issue with incorrect handling of the DHCP egress traffic by reconfiguring the external traffic policy for the dhcp-lb Kubernetes Service. For details about the issue, refer to the Kubernetes upstream bug.

    On existing clusters with multiple L2 segments that use DHCP relays on the border switches, manually point the DHCP relays on your network infrastructure to the new IP address of the dhcp-lb Service of the Container Cloud cluster; otherwise, you will not be able to provision new nodes or reprovision existing ones.

    To obtain the new IP address:

    kubectl -n kaas get service dhcp-lb
    
  • [35429] [BM] Fixed the issue with the WireGuard interface not having the IPv4 address assigned. The fix implies automatic restart of the calico-node Pod to allocate the IPv4 address on the WireGuard interface.

  • [36131] [BM] Fixed the issue with IpamHost object changes not being propagated to LCMMachine during netplan configuration after cluster deployment.

  • [34657] [LCM] Fixed the issue with iam-keycloak Pods not starting after powering up master nodes and starting the Container Cloud upgrade right after.

  • [34750] [LCM] Fixed the issue with journald generating a lot of log messages that already exist in the auditd log due to enabled systemd-journald-audit.socket.

  • [35738] [StackLight] Fixed the issue with ucp-node-exporter failing to start because it could not bind port 9100, which was already in use by the StackLight node-exporter.

    The resolution of the issue involves an automatic change of the port for the StackLight node-exporter from 9100 to 19100. No manual port update is required.

    If your cluster uses a firewall, add an additional firewall rule that grants the same permissions to port 19100 as those currently assigned to port 9100 on all cluster nodes.

  • [34296] [StackLight] Fixed the issue with the CPU over-consumption by helm-controller leading to the KubeContainersCPUThrottlingHigh alert firing.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.25.0 including the Cluster releases 17.0.0, 16.0.0, and 14.1.0.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.

Bare metal
[42386] A load balancer service does not obtain the external IP address

Due to the MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first service that was changed remains without an external IP address assigned. However, the second service, which was changed later, has the external IP address assigned as expected.

To work around the issue, make a dummy change to the service object where external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[35089] Calico does not set up networking for a pod

Fixed in 17.0.1 and 16.0.1 for MKE 3.7.2

An arbitrary Kubernetes pod may get stuck in an error loop due to a failed Calico networking setup for that pod. The pod cannot access any network resources. The issue occurs more often during cluster upgrade or node replacement, but it can sometimes also happen during a new deployment.

You may find the following log for the failed pod IP (for example, 10.233.121.132) in calico-node logs:

felix/route_table.go 898: Syncing routes: found unexpected route; ignoring due to grace period. dest=10.233.121.132/32 ifaceName="cali9731b965838" ifaceRegex="^cali." ipVersion=0x4 tableIndex=254
felix/route_table.go 898: Syncing routes: found unexpected route; ignoring due to grace period. dest=10.233.121.132/32 ifaceName="cali9731b965838" ifaceRegex="^cali." ipVersion=0x4 tableIndex=254
...
felix/route_table.go 902: Remove old route dest=10.233.121.132/32 ifaceName="cali9731b965838" ifaceRegex="^cali.*" ipVersion=0x4 routeProblems=[]string{"unexpected route"} tableIndex=254
felix/conntrack.go 90: Removing conntrack flows ip=10.233.121.132

The workaround is to manually restart the affected pod:

kubectl delete pod <failedPodID>
[33936] Deletion failure of a controller node during machine replacement

Fixed in 17.0.1 and 16.0.1 for MKE 3.7.2

Due to the upstream Calico issue, a controller node cannot be deleted if the calico-node Pod is stuck blocking node deletion. One of the symptoms is the following warning in the baremetal-operator logs:

Resolving dependency Service dhcp-lb in namespace kaas failed: \
the server was unable to return a response in the time allotted,\
but may still be processing the request (get endpoints dhcp-lb).

As a workaround, delete the Pod that is stuck to retrigger the node deletion.

[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.
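
For example, before deleting the machine, run the following against the affected node (depending on the workloads running on the node, drain may additionally require flags such as --ignore-daemonsets):

kubectl cordon <nodeName>
kubectl drain <nodeName>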


OpenStack
[37634] Cluster deployment or upgrade is blocked by all pods in ‘Pending’ state

Fixed in 17.0.3 and 16.0.3

When using OpenStackCredential with a custom CACert, a management or managed cluster deployment or upgrade is blocked by all pods being stuck in the Pending state. The issue is caused by incorrect secrets being used to initialize the OpenStack external Cloud Provider Interface.

As a workaround, copy CACert from the OpenStackCredential object to openstack-ca-secret:

kubectl --kubeconfig <pathToFailedClusterKubeconfig> patch secret -n kube-system openstack-ca-secret -p '{"data":{"ca.pem":"'$(kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <affectedProjectName> get openstackcredentials <credentialsName> -o go-template="{{.spec.CACert}}")'"}}'

If the CACert from the OpenStackCredential is not base64-encoded:

kubectl --kubeconfig <pathToFailedClusterKubeconfig> patch secret -n kube-system openstack-ca-secret -p '{"data":{"ca.pem":"'$(kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <affectedProjectName> get openstackcredentials <credentialsName> -o go-template="{{.spec.CACert}}" | base64)'"}}'

In either command above, replace the following values:

  • <pathToFailedClusterKubeconfig> is the file path to the affected managed or management cluster kubeconfig.

  • <pathToManagementClusterKubeconfig> is the file path to the Container Cloud management cluster kubeconfig.

  • <affectedProjectName> is the Container Cloud project name containing the cluster with stuck pods. For a management cluster, the value is default.

  • <credentialsName> is the OpenStackCredential name used for the deployment.


IAM
[37766] Sign-in to the MKE web UI fails with ‘invalid parameter: redirect_uri’

Fixed in 17.0.3 and 16.0.3

A sign-in to the MKE web UI of the management cluster using the Sign in with External Provider option can fail with the invalid parameter: redirect_uri error.

Workaround:

  1. Log in to the Keycloak admin console.

  2. In the sidebar menu, switch to the IAM realm.

  3. Navigate to Clients > kaas.

  4. On the client page, navigate to Settings > Access settings > Valid redirect URIs.

  5. Add https://<mgmt mke ip>:6443/* to the list of valid redirect URIs and click Save.

  6. Refresh the browser window with the sign-in URI.


LCM
[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.
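
A sketch of steps 3 and 4 from the CLI is provided below; the namespace and Pod name are placeholders that depend on your deployment, and you may need to add -c <containerName> if the Pod runs several containers. Run it only after confirming that the other replicas are healthy:

# Remove the galera.cache file from the affected replica
kubectl -n <namespace> exec <mariadbServerPodName> -- rm /var/lib/mysql/galera.cache
# Delete the affected Pod so that Kubernetes restarts it and re-clones the database
kubectl -n <namespace> delete pod <mariadbServerPodName>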

[32761] Node cleanup fails due to remaining devices

Fixed in 17.1.0 and 16.1.0

On MOSK clusters, the Ansible provisioner may hang in a loop while trying to remove LVM thin pool logical volumes (LVs) due to issues with volume detection before removal. The Ansible provisioner cannot remove LVM thin pool LVs correctly, so it consistently detects the same volumes whenever it scans disks, leading to a repetitive cleanup process.

The following symptoms mean that a cluster can be affected:

  • A node was configured to use thin pool LVs. For example, it had the OpenStack Cinder role in the past.

  • A bare metal node deployment flaps between the provisioning and deprovisioning states.

  • In the Ansible provisioner logs, the following example warnings are growing:

    88621.log:7389:2023-06-22 16:30:45.109 88621 ERROR ansible.plugins.callback.ironic_log
    [-] Ansible task clean : fail failed on node 14eb0dbc-c73a-4298-8912-4bb12340ff49:
    {'msg': 'There are more devices to clean', '_ansible_no_log': None, 'changed': False}
    

    Important

     There are more devices to clean is a regular warning indicating some in-progress tasks. However, if the number of such warnings grows while the node flaps between the provisioning and deprovisioning states, the cluster is most likely affected by the issue.

As a workaround, erase disks manually using any preferred tool.

[30294] Replacement of a master node is stuck on the calico-node Pod start

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a master node on a cluster of any type, the calico-node Pod fails to start on a new node that has the same IP address as the node being replaced.

Workaround:

  1. Log in to any master node.

  2. From a CLI with an MKE client bundle, create a shell alias to start calicoctl using the mirantis/ucp-dsinfo image. Two alias variants are provided below; use the one that matches the etcd certificate layout on your cluster:

    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/key.pem \
    -e ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/ca.pem \
    -e ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v /var/lib/docker/volumes/ucp-kv-certs/_data:/var/lib/docker/volumes/ucp-kv-certs/_data:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl \
    "
    
    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \
    -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \
    -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v ucp-node-certs:/ucp-node-certs:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl --allow-version-mismatch \
    "
    

    In the above command, replace the following values with the corresponding settings of the affected cluster:

    • <etcdEndpoint> is the etcd endpoint defined in the Calico configuration file. For example, ETCD_ENDPOINTS=127.0.0.1:12378

    • <mkeVersion> is the MKE version installed on your cluster. For example, mirantis/ucp-dsinfo:3.5.7.

  3. Verify the node list on the cluster:

    kubectl get node
    
  4. Compare this list with the node list in Calico to identify the old node:

    calicoctl get node -o wide
    
  5. Remove the old node from Calico:

    calicoctl delete node kaas-node-<nodeID>
    
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>

Ceph
[34820] The Ceph ‘rook-operator’ fails to connect to RGW on FIPS nodes

Fixed in 17.1.0 and 16.1.0

Due to the upstream Ceph issue, on clusters with the Federal Information Processing Standard (FIPS) mode enabled, the Ceph rook-operator fails to connect to Ceph RADOS Gateway (RGW) pods.

As a workaround, do not place Ceph RGW pods on nodes where FIPS mode is enabled.

[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a managed cluster based on bare metal and Ceph enabled fails with PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the description of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.

Update
[37268] Container Cloud upgrade is blocked by a node in ‘Prepare’ or ‘Deploy’ state

Fixed in 17.1.0 and 16.1.0

Container Cloud upgrade may be blocked by a node being stuck in the Prepare or Deploy state with error processing package openssh-server. The issue is caused by customizations in /etc/ssh/sshd_config, such as additional Match statements. This file is managed by Container Cloud and must not be altered manually.

As a workaround, move customizations from sshd_config to a new file in the /etc/ssh/sshd_config.d/ directory.
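
For example, on the affected node (the drop-in file name is arbitrary; validate the configuration before reloading the service):

# Move local Match statements and other customizations into a drop-in file
sudo vi /etc/ssh/sshd_config.d/99-local.conf
# Validate the resulting sshd configuration
sudo sshd -t
# Reload the SSH service
sudo systemctl reload ssh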

[36928] The helm-controller Deployment is stuck during cluster update

During a cluster update, a Kubernetes helm-controller Deployment may get stuck in a restarting Pod loop with Terminating and Running states flapping. Other Deployment types may also be affected.

As a workaround, restart the Deployment that got stuck:

kubectl -n <affectedProjectName> get deploy <affectedDeployName> -o yaml

kubectl -n <affectedProjectName> scale deploy <affectedDeployName> --replicas 0

kubectl -n <affectedProjectName> scale deploy <affectedDeployName> --replicas <replicasNumber>

In the command above, replace the following values:

  • <affectedProjectName> is the Container Cloud project name containing the cluster with stuck Pods

  • <affectedDeployName> is the Deployment name that failed to run Pods in the specified project

  • <replicasNumber> is the original number of replicas for the Deployment that you can obtain using the get deploy command

[33438] ‘CalicoDataplaneFailuresHigh’ alert is firing during cluster update

During cluster update of a managed bare metal cluster, the false positive CalicoDataplaneFailuresHigh alert may be firing. Disregard this alert, which will disappear once cluster update succeeds.

The observed behavior is typical for calico-node during upgrades, as workload changes occur frequently. Consequently, there is a possibility of temporary desynchronization in the Calico dataplane. This can occasionally result in throttling when applying workload changes to the Calico dataplane.

Components versions

The following table lists the major components and their versions delivered in the Container Cloud 2.25.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

Bare metal Updated

ambasador

1.38.17

baremetal-dnsmasq

base-alpine-20231013162346

baremetal-operator

base-alpine-20231101201729

baremetal-provider

1.38.17

bm-collective

base-alpine-20230929115341

cluster-api-provider-baremetal

1.38.17

ironic

yoga-jammy-20230914091512

ironic-inspector

yoga-jammy-20230914091512

ironic-prometheus-exporter

0.1-20230912104602

kaas-ipam

base-alpine-20230911165405

kubernetes-entrypoint

1.0.1-27d64fb-20230421151539

mariadb

10.6.14-focal-20230912121635

metallb-controller

0.13.9-0d8e8043-amd64

metallb-speaker

0.13.9-0d8e8043-amd64

syslog-ng

base-apline-20230914091214

IAM

iam Updated

2.5.8

iam-controller Updated

1.38.17

keycloak

21.1.1

Container Cloud

admission-controller Updated

1.38.17

agent-controller Updated

1.38.17

ceph-kcc-controller Updated

1.38.17

cert-manager-controller

1.11.0-2

cinder-csi-plugin New

1.27.2-8

client-certificate-controller Updated

1.38.17

configuration-collector New

1.38.17

csi-attacher New

4.2.0-2

csi-node-driver-registrar New

2.7.0-2

csi-provisioner New

3.4.1-2

csi-resizer New

1.7.0-2

csi-snapshotter New

6.2.1-mcc-1

event-controller Updated

1.38.17

frontend Updated

1.38.17

golang

1.20.4-alpine3.17

iam-controller Updated

1.38.17

kaas-exporter Updated

1.38.17

kproxy Updated

1.38.17

lcm-controller Updated

1.38.17

license-controller Updated

1.38.17

livenessprobe New

2.9.0-2

machinepool-controller Updated

1.38.17

mcc-haproxy

0.23.0-73-g01aa9b3

metrics-server

0.6.3-2

nginx Updated

1.38.17

portforward-controller Updated

1.38.17

proxy-controller Updated

1.38.17

rbac-controller Updated

1.38.17

registry

2.8.1-5

release-controller Updated

1.38.17

rhellicense-controller Updated

1.38.17

scope-controller Updated

1.38.17

storage-discovery Updated

1.38.17

user-controller Updated

1.38.17

OpenStack Updated

openstack-cloud-controller-manager

1.27.2-8

openstack-cluster-api-controller

1.38.17

openstack-provider

1.38.17

os-credentials-controller

1.38.17

VMware vSphere

mcc-keepalived Updated

0.23.0-73-g01aa9b3

squid-proxy

0.0.1-10-g24a0d69

vsphere-cluster-api-controller Updated

1.38.17

vsphere-credentials-controller Updated

1.38.17

vsphere-provider Updated

1.38.17

vsphere-vm-template-controller Updated

1.38.17

Artifacts

This section lists the artifacts of components included in the Container Cloud release 2.25.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries Updated

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20231012141354

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20231012141354

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-113-4f8b843.tgz

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.38.17.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.38.17.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.38.17.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.38.17.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.38.17.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.38.17.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.38.17.tgz

Docker images Updated

ambasador

mirantis.azurecr.io/core/external/nginx:1.38.17

baremetal-dnsmasq

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-alpine-20231013162346

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-alpine-20231101201729

bm-collective

mirantis.azurecr.io/bm/bm-collective:base-alpine-20230929115341

cluster-api-provider-baremetal

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.38.17

ironic

mirantis.azurecr.io/openstack/ironic:yoga-jammy-20230914091512

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:yoga-jammy-20230914091512

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20230912104602

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-alpine-20230911165405

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-27d64fb-20230421151539

mariadb

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20230912121635

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.23.0-73-g01aa9b3

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.13.9-0d8e8043-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.13.9-0d8e8043-amd64

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-apline-20230914091214

Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.38.17.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.38.17.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.38.17.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.38.17.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.38.17.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.38.17.tgz

cinder-csi-plugin New

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.38.17.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.38.17.tgz

configuration-collector New

https://binary.mirantis.com/core/helm/configuration-collector-1.38.17.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.38.17.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.38.17.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.38.17.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.38.17.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.38.17.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.38.17.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.38.17.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.38.17.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.38.17.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.38.17.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.38.17.tgz

openstack-cloud-controller-manager New

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.38.17.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.38.17.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.38.17.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.38.17.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.38.17.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.38.17.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.38.17.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.38.17.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.38.17.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.38.17.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.38.17.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.38.17.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.38.17.tgz

vsphere-vm-template-controller

https://binary.mirantis.com/core/helm/vsphere-vm-template-controller-1.38.17.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.38.17

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.38.17

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.38.17

cert-manager-controller Updated

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-2

cinder-csi-plugin New

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-8

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.38.17

configuration-collector New

mirantis.azurecr.io/core/configuration-collector:1.38.17

csi-attacher New

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-2

csi-node-driver-registrar New

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-2

csi-provisioner New

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-2

csi-resizer New

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-2

csi-snapshotter New

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-1

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.38.17

frontend Updated

mirantis.azurecr.io/core/frontend:1.38.17

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.38.17

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.38.17

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.38.17

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.38.17

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.38.17

livenessprobe New

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-2

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.38.17

mcc-haproxy Updated

mirantis.azurecr.io/lcm/mcc-haproxy:v0.23.0-73-g01aa9b3

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.23.0-73-g01aa9b3

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-2

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.38.17

openstack-cloud-controller-manager Updated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-8

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.38.17

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.38.17

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.38.17

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.38.17

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.38.17

registry Updated

mirantis.azurecr.io/lcm/registry:v2.8.1-6

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.38.17

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.38.17

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.38.17

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.38.17

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.38.17

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.38.17

vsphere-vm-template-controller Updated

mirantis.azurecr.io/core/vsphere-vm-template-controller:1.38.17

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts Updated

iam

https://binary.mirantis.com/iam/helm/iam-2.5.8.tgz

Docker images

keycloak

mirantis.azurecr.io/iam/keycloak:0.6.0

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-27d64fb-20230421151539

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20230730124341

Security notes

The table below includes the total numbers of addressed unique and common CVEs by product component since the 2.24.5 patch release. The common CVEs are issues addressed across several images.

Addressed CVEs - summary

Container Cloud component    CVE type    Critical    High    Total
---------------------------  ----------  ----------  ------  -----
Kaas core                    Unique      7           39      46
                             Common      54          305     359
Ceph                         Unique      0           1       1
                             Common      0           1       1
StackLight                   Unique      0           5       5
                             Common      0           13      13

Mirantis Security Portal

For the detailed list of fixed and existing CVEs across the Mirantis Container Cloud and MOSK products, refer to Mirantis Security Portal.

MOSK CVEs

For the number of fixed CVEs in the MOSK-related components including OpenStack and Tungsten Fabric, refer to MOSK 23.3: Security notes.

Update notes

This section describes the specific actions that you, as a cloud operator, need to complete before or after updating your Container Cloud cluster to the Cluster release 17.0.0, 16.0.0, or 14.1.0.

Consider this information as a supplement to the generic update procedures published in Operations Guide: Automatic upgrade of a management cluster and Update a managed cluster.

Pre-update actions
Upgrade to Ubuntu 20.04 on baremetal-based clusters

The Cluster release series 14.x and 15.x are the last ones where Ubuntu 18.04 is supported on existing clusters. A Cluster release update to 17.0.0 or 16.0.0 is impossible for a cluster running on Ubuntu 18.04.

Therefore, to prevent the cluster update from being blocked, make sure that the operating system on all cluster nodes is upgraded to Ubuntu 20.04 as described in Operations Guide: Upgrade an operating system distribution.

Configure managed clusters with the etcd storage quota set

If your cluster has a custom etcd storage quota set as described in Increase storage quota for etcd, configure the LCMMachine resources before the management cluster upgrade to 2.25.0:

  1. Manually set the ucp_etcd_storage_quota parameter in LCMMachine resources of the cluster controller nodes:

    spec:
      stateItemsOverwrites:
        deploy:
          ucp_etcd_storage_quota: "<custom_etcd_storage_quota_value>"
    

    If the stateItemsOverwrites.deploy section is already set, append ucp_etcd_storage_quota to the existing parameters.

    To obtain the list of the cluster LCMMachine resources:

    kubectl -n <cluster_namespace> get lcmmachine
    

    To edit the cluster LCMMachine resources of the control type (for a non-interactive alternative, see the sketch after this procedure):

    kubectl -n <cluster_namespace> edit lcmmachine <control_lcmmachine_name>
    
  2. After the management cluster is upgraded to 2.25.0, update your managed cluster to the Cluster release 17.0.0 or 16.0.0.

  3. Manually remove the ucp_etcd_storage_quota parameter from the stateItemsOverwrites.deploy section.
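As a non-interactive alternative to editing each control plane LCMMachine in step 1, you can apply the same change with a single kubectl patch command. The following sketch reuses the placeholders from the procedure above and applies a JSON merge patch, which preserves any other parameters already present under stateItemsOverwrites.deploy:

kubectl -n <cluster_namespace> patch lcmmachine <control_lcmmachine_name> --type merge \
  -p '{"spec":{"stateItemsOverwrites":{"deploy":{"ucp_etcd_storage_quota":"<custom_etcd_storage_quota_value>"}}}}'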

Allow the TCP port 12392 for management cluster nodes

The Cluster release 16.x and 17.x series are shipped with MKE 3.7.x. To ensure cluster operability after the update, verify that the TCP port 12392 is allowed in your network for the Container Cloud management cluster nodes.

For the full list of the required ports for MKE, refer to MKE Documentation: Open ports to incoming traffic.
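If you need to quickly verify connectivity before or after the update, you can probe the port from a host that must reach the management cluster nodes. The command below is an illustrative check, not part of the official MKE procedure; replace the placeholder with a management cluster node IP address:

nc -zv <management_cluster_node_IP> 12392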

Post-update actions
Migrate Ceph cluster to address storage devices using by-id

Container Cloud uses the device by-id identifier as the default method of addressing the underlying devices of Ceph OSDs. This is the only persistent device identifier for a Ceph cluster that remains stable after cluster upgrade or any other cluster maintenance.

Therefore, if your existing Ceph clusters still use device names or device by-path symlinks, migrate them to the by-id format as described in Migrate Ceph cluster to address storage devices using by-id.
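Before planning the migration, you can inspect which by-id symlinks the devices of your Ceph OSDs resolve to on a storage node. The commands below are an informational sketch; <device> is a placeholder for a device name such as sdb:

ls -l /dev/disk/by-id/ | grep <device>
udevadm info --query=symlink --name=/dev/<device>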

Point DHCP relays on routers to the new dhcp-lb IP address

If your managed cluster has multiple L2 segments that use DHCP relays on the border switches, manually point the DHCP relays on your network infrastructure to the new IP address of the dhcp-lb service of the Container Cloud managed cluster after the related management cluster automatically upgrades to Container Cloud 2.25.0. Otherwise, you cannot provision new nodes or reprovision existing ones.

To obtain the new IP address:

kubectl -n kaas get service dhcp-lb

This change is required because the release includes a fix for the incorrect handling of DHCP egress traffic. The fix involves reconfiguring the external traffic policy of the dhcp-lb Kubernetes Service. For details about the underlying issue, refer to the Kubernetes upstream bug.
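To additionally confirm that the service configuration is as expected after the upgrade, you can inspect the external traffic policy and the allocated external IP address of the dhcp-lb service. The jsonpath expression below is illustrative:

kubectl -n kaas get service dhcp-lb \
  -o jsonpath='externalTrafficPolicy={.spec.externalTrafficPolicy}{"\n"}externalIP={.status.loadBalancer.ingress[0].ip}{"\n"}'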

2.24.5

The Container Cloud patch release 2.24.5, which is based on the 2.24.2 major release, provides the following updates:

  • Support for the patch Cluster releases 14.0.4 and 15.0.4 that represent the Mirantis OpenStack for Kubernetes (MOSK) patch release 23.2.3.

  • Security fixes for CVEs of Critical and High severity

This patch release also supports the latest major Cluster releases 14.0.1 and 15.0.1. However, it does not support greenfield deployments based on the deprecated Cluster releases 15.0.3, 15.0.2, 14.0.3, and 14.0.2, as well as the 12.7.x and 11.7.x series. Use the latest available Cluster releases for new deployments instead.

For main deliverables of the parent Container Cloud releases of 2.24.5, refer to 2.24.0 and 2.24.1.

Artifacts

This section lists the component artifacts of the Container Cloud patch release 2.24.5. For artifacts of the Cluster releases introduced in 2.24.5, see patch Cluster releases 15.0.4 and 14.0.4.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20230606121129

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20230606121129

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-104-6e2e82c.tgz

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.37.25.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.37.25.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.37.25.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.37.25.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.37.25.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.37.25.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.37.25.tgz

Docker images

ambasador Updated

mirantis.azurecr.io/core/external/nginx:1.37.25

baremetal-dnsmasq

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-alpine-20230810152159

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-alpine-20230803175048

bm-collective

mirantis.azurecr.io/bm/bm-collective:base-alpine-20230829084517

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.37.25

ironic

mirantis.azurecr.io/openstack/ironic:yoga-focal-20230810113432

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:yoga-focal-20230810113432

ironic-prometheus-exporter Updated

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20230912104602

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-alpine-20230810155639

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-5359171-20230810125608

mariadb

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20230730124341

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.22.0-75-g08569a8

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.13.9-53df4a9c-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.13.9-53df4a9c-amd64

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-apline-20230814110635

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.37.25.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.37.25.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.37.25.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.37.25.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.37.25.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.37.25.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.37.25.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.37.25.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.37.25.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.37.25.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.37.25.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.37.25.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.37.25.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.37.25.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.37.25.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.37.25.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.37.25.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.37.25.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.37.25.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.37.25.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.37.25.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.37.25.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.37.25.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.37.25.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.37.25.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.37.25.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.37.25.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.37.25.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.37.25.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.37.25.tgz

vsphere-vm-template-controller

https://binary.mirantis.com/core/helm/vsphere-vm-template-controller-1.37.25.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.37.25

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.37.25

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.37.25

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-2

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.37.25

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.37.25

frontend Updated

mirantis.azurecr.io/core/frontend:1.37.25

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.37.25

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.37.25

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.37.25

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.37.25

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.37.25

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.37.25

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.22.0-75-g08569a8

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.22.0-75-g08569a8

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-2

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.37.25

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager-amd64:v1.24.5-13

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.37.25

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.37.25

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.37.25

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.37.25

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.37.25

registry Updated

mirantis.azurecr.io/lcm/registry:v2.8.1-5

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.37.25

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.37.25

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.37.25

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.37.25

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.37.25

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.37.25

vsphere-vm-template-controller Updated

mirantis.azurecr.io/core/vsphere-vm-template-controller:1.37.25

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts

iam

https://binary.mirantis.com/iam/helm/iam-2.5.4.tgz

Docker images

keycloak

mirantis.azurecr.io/iam/keycloak:0.6.0

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-27d64fb-20230421151539

mariadb

mirantis.azurecr.io/general/mariadb:10.6.12-focal-20230423170220

Security notes

In total, 21 Common Vulnerabilities and Exposures (CVE) have been fixed in 2.24.5 since Container Cloud 2.24.4: 18 of critical and 3 of high severity.

The summary table contains the total number of unique CVEs along with the total number of issues fixed across the images.

The full list of the CVEs present in the current Container Cloud release is available at the Mirantis Security Portal.

Addressed CVEs - summary

Severity

Critical

High

Total

Unique CVEs

1

1

2

Total issues across images

18

3

21

Addressed CVEs - detailed

Image

Component name

CVE

core/external/nginx

libwebp

CVE-2023-4863 (High)

core/frontend

libwebp

CVE-2023-4863 (High)

lcm/kubernetes/openstack-cloud-controller-manager-amd64

busybox

CVE-2022-48174 (Critical)

busybox-binsh

CVE-2022-48174 (Critical)

ssl_client

CVE-2022-48174 (Critical)

lcm/registry

busybox

CVE-2022-48174 (Critical)

busybox-binsh

CVE-2022-48174 (Critical)

ssl_client

CVE-2022-48174 (Critical)

scale/curl-jq

busybox

CVE-2022-48174 (Critical)

busybox-binsh

CVE-2022-48174 (Critical)

ssl_client

CVE-2022-48174 (Critical)

stacklight/alertmanager-webhook-servicenow

busybox

CVE-2022-48174 (Critical)

busybox-binsh

CVE-2022-48174 (Critical)

ssl_client

CVE-2022-48174 (Critical)

stacklight/grafana-image-renderer

libwebp

CVE-2023-4863 (High)

stacklight/ironic-prometheus-exporter

busybox

CVE-2022-48174 (Critical)

busybox-binsh

CVE-2022-48174 (Critical)

ssl_client

CVE-2022-48174 (Critical)

stacklight/sf-reporter

busybox

CVE-2022-48174 (Critical)

busybox-binsh

CVE-2022-48174 (Critical)

ssl_client

CVE-2022-48174 (Critical)

2.24.4

The Container Cloud patch release 2.24.4, which is based on the 2.24.2 major release, provides the following updates:

  • Support for the patch Cluster releases 14.0.3 and 15.0.3 that represent the Mirantis OpenStack for Kubernetes (MOSK) patch release 23.2.2.

  • Support for the multi-rack topology on bare metal managed clusters

  • Support for configuration of the etcd storage quota

  • Security fixes for CVEs of Critical and High severity

This patch release also supports the latest major Cluster releases 14.0.1 and 15.0.1. However, it does not support greenfield deployments based on the deprecated Cluster releases 15.0.2 and 14.0.2, as well as the 12.7.x and 11.7.x series. Use the latest available Cluster releases for new deployments instead.

For main deliverables of the parent Container Cloud releases of 2.24.4, refer to 2.24.0 and 2.24.1.

Enhancements

This section outlines new features and enhancements introduced in the Container Cloud patch release 2.24.4.

Configuration of the etcd storage quota

Added the capability to configure the etcd storage quota, which is 2 GB by default. You may need to increase the default etcd storage quota if etcd runs out of space and there is no other way to clean up the storage on your management or managed cluster.

Multi-rack topology for bare metal managed clusters

TechPreview

Added support for the multi-rack topology on bare metal managed clusters. Implementation of the multi-rack topology implies the use of Rack and MultiRackCluster objects that support configuration of BGP announcement of the cluster API load balancer address.

You can now create a managed cluster where cluster nodes including Kubernetes masters are distributed across multiple racks without L2 layer extension between them, and use BGP for announcement of the cluster API load balancer address and external addresses of Kubernetes load-balanced services.

Artifacts

This section lists the component artifacts of the Container Cloud patch release 2.24.4. For artifacts of the Cluster releases introduced in 2.24.4, see patch Cluster releases 15.0.3 and 14.0.3.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20230606121129

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20230606121129

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-104-6e2e82c.tgz

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.37.24.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.37.24.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.37.24.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.37.24.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.37.24.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.37.24.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.37.24.tgz

Docker images

ambasador Updated

mirantis.azurecr.io/core/external/nginx:1.37.24

baremetal-dnsmasq

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-alpine-20230810152159

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-alpine-20230803175048

bm-collective Updated

mirantis.azurecr.io/bm/bm-collective:base-alpine-20230829084517

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.37.24

ironic

mirantis.azurecr.io/openstack/ironic:yoga-focal-20230810113432

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:yoga-focal-20230810113432

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20230531081117

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-alpine-20230810155639

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-5359171-20230810125608

mariadb

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20230730124341

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.22.0-66-ga855169

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.13.9-53df4a9c-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.13.9-53df4a9c-amd64

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-apline-20230814110635

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.37.24.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.37.24.tgz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.37.24.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.37.24.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.37.24.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.37.24.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.37.24.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.37.24.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.37.24.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.37.24.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.37.24.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.37.24.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.37.24.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.37.24.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.37.24.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.37.24.tgz

mcc-cache-warmup

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.37.24.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.37.24.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.37.24.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.37.24.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.37.24.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.37.24.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.37.24.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.37.24.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.37.24.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.37.24.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.37.24.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.37.24.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.37.24.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.37.24.tgz

vsphere-vm-template-controller

https://binary.mirantis.com/core/helm/vsphere-vm-template-controller-1.37.24.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.37.24

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.37.24

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.37.24

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-2

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.37.24

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.37.24

frontend Updated

mirantis.azurecr.io/core/frontend:1.37.24

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.37.24

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.37.24

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.37.24

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.37.24

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.37.24

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.37.24

mcc-haproxy Updated

mirantis.azurecr.io/lcm/mcc-haproxy:v0.22.0-66-ga855169

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.22.0-66-ga855169

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-2

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.37.24

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager-amd64:v1.24.5-10-g93314b86

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.37.24

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.37.24

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.37.24

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.37.24

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.37.24

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-4

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.37.24

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.37.24

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.37.24

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.37.24

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.37.24

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.37.24

vsphere-vm-template-controller Updated

mirantis.azurecr.io/core/vsphere-vm-template-controller:1.37.24

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts

iam

https://binary.mirantis.com/iam/helm/iam-2.5.4.tgz

Docker images

keycloak

mirantis.azurecr.io/iam/keycloak:0.6.0

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-27d64fb-20230421151539

mariadb

mirantis.azurecr.io/general/mariadb:10.6.12-focal-20230423170220

Security notes

In total, 18 Common Vulnerabilities and Exposures (CVE) have been fixed in 2.24.4 since Container Cloud 2.24.3: 3 of critical and 15 of high severity.

The summary table contains the total number of unique CVEs along with the total number of issues fixed across the images.

The full list of the CVEs present in the current Container Cloud release is available at the Mirantis Security Portal.

Addressed CVEs - summary

Severity

Critical

High

Total

Unique CVEs

1

10

11

Total issues across images

3

15

18

Addressed CVEs - detailed

Image

Component name

CVE

iam/keycloak-gatekeeper

golang.org/x/crypto

CVE-2021-43565 (High)

CVE-2022-27191 (High)

CVE-2020-29652 (High)

golang.org/x/net

CVE-2022-27664 (High)

CVE-2021-33194 (High)

golang.org/x/text

CVE-2021-38561 (High)

CVE-2022-32149 (High)

github.com/prometheus/client_golang

CVE-2022-21698 (High)

scale/psql-client

busybox

CVE-2022-48174 (Critical)

busybox-binsh

CVE-2022-48174 (Critical)

ssl_client

CVE-2022-48174 (Critical)

libpq

CVE-2023-39417 (High)

postgresql13-client

CVE-2023-39417 (High)

stacklight/alerta-web

grpcio

CVE-2023-33953 (High)

libpq

CVE-2023-39417 (High)

postgresql15-client

CVE-2023-39417 (High)

stacklight/pgbouncer

libpq

CVE-2023-39417 (High)

postgresql-client

CVE-2023-39417 (High)

Addressed issues

The following issues have been addressed in the Container Cloud patch release 2.24.4 along with the patch Cluster releases 14.0.3 and 15.0.3.

  • [34200][Ceph] Fixed the watch command missing in the rook-ceph-tools Pod.

  • [34836][Ceph] Fixed ceph-disk-daemon spawning a lot of zombie processes.

2.24.3

The Container Cloud patch release 2.24.3, which is based on the 2.24.2 major release, provides the following updates:

  • Support for the patch Cluster releases 14.0.2 and 15.0.2

  • Security fixes for CVEs of High severity

This patch release also supports the latest major Cluster releases 14.0.1 and 15.0.1. However, it does not support greenfield deployments based on the deprecated Cluster release 14.0.0 as well as the 12.7.x and 11.7.x series. Use the latest available Cluster releases instead.

For main deliverables of the parent Container Cloud releases of 2.24.3, refer to 2.24.0 and 2.24.1.

Artifacts

This section lists the component artifacts of the Container Cloud patch release 2.24.3. For artifacts of the Cluster releases introduced in 2.24.3, see Cluster releases 15.0.2 and 14.0.2.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20230606121129

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20230606121129

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-104-6e2e82c.tgz

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.37.23.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.37.23.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.37.23.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.37.23.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.37.23.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.37.23.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.37.23.tgz

Docker images

ambasador Updated

mirantis.azurecr.io/core/external/nginx:1.37.23

baremetal-dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-alpine-20230810152159

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-alpine-20230803175048

bm-collective Updated

mirantis.azurecr.io/bm/bm-collective:base-alpine-20230810134945

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.37.23

ironic Updated

mirantis.azurecr.io/openstack/ironic:yoga-focal-20230810113432

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:yoga-focal-20230810113432

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20230531081117

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-alpine-20230810155639

kubernetes-entrypoint Updated

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-5359171-20230810125608

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.6.14-focal-20230730124341

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.22.0-63-g8f4f248

metallb-controller Updated

mirantis.azurecr.io/bm/metallb/controller:v0.13.9-53df4a9c-amd64

metallb-speaker Updated

mirantis.azurecr.io/bm/metallb/speaker:v0.13.9-53df4a9c-amd64

syslog-ng Updated

mirantis.azurecr.io/bm/syslog-ng:base-apline-20230814110635

Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.37.23.tgz

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.37.23.tgz

Helm charts

admission-controller Updated

https://binary.mirantis.com/core/helm/admission-controller-1.37.23.tgz

agent-controller Updated

https://binary.mirantis.com/core/helm/agent-controller-1.37.23.tgz

ceph-kcc-controller Updated

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.37.23.tgz

cert-manager Updated

https://binary.mirantis.com/core/helm/cert-manager-1.37.23.tgz

client-certificate-controller Updated

https://binary.mirantis.com/core/helm/client-certificate-controller-1.37.23.tgz

event-controller Updated

https://binary.mirantis.com/core/helm/event-controller-1.37.23.tgz

iam-controller Updated

https://binary.mirantis.com/core/helm/iam-controller-1.37.23.tgz

kaas-exporter Updated

https://binary.mirantis.com/core/helm/kaas-exporter-1.37.23.tgz

kaas-public-api Updated

https://binary.mirantis.com/core/helm/kaas-public-api-1.37.23.tgz

kaas-ui Updated

https://binary.mirantis.com/core/helm/kaas-ui-1.37.23.tgz

lcm-controller Updated

https://binary.mirantis.com/core/helm/lcm-controller-1.37.23.tgz

license-controller Updated

https://binary.mirantis.com/core/helm/license-controller-1.37.23.tgz

machinepool-controller Updated

https://binary.mirantis.com/core/helm/machinepool-controller-1.37.23.tgz

mcc-cache Updated

https://binary.mirantis.com/core/helm/mcc-cache-1.37.23.tgz

mcc-cache-warmup Updated

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.37.23.tgz

metrics-server Updated

https://binary.mirantis.com/core/helm/metrics-server-1.37.23.tgz

openstack-provider Updated

https://binary.mirantis.com/core/helm/openstack-provider-1.37.23.tgz

os-credentials-controller Updated

https://binary.mirantis.com/core/helm/os-credentials-controller-1.37.23.tgz

portforward-controller Updated

https://binary.mirantis.com/core/helm/portforward-controller-1.37.23.tgz

proxy-controller Updated

https://binary.mirantis.com/core/helm/proxy-controller-1.37.23.tgz

rbac-controller Updated

https://binary.mirantis.com/core/helm/rbac-controller-1.37.23.tgz

release-controller Updated

https://binary.mirantis.com/core/helm/release-controller-1.37.23.tgz

rhellicense-controller Updated

https://binary.mirantis.com/core/helm/rhellicense-controller-1.37.23.tgz

scope-controller Updated

https://binary.mirantis.com/core/helm/scope-controller-1.37.23.tgz

squid-proxy Updated

https://binary.mirantis.com/core/helm/squid-proxy-1.37.23.tgz

user-controller Updated

https://binary.mirantis.com/core/helm/user-controller-1.37.23.tgz

vsphere-credentials-controller Updated

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.37.23.tgz

vsphere-provider Updated

https://binary.mirantis.com/core/helm/vsphere-provider-1.37.23.tgz

vsphere-vm-template-controller Updated

https://binary.mirantis.com/core/helm/vsphere-vm-template-controller-1.37.23.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.37.23

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.37.23

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.37.23

cert-manager-controller Updated

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0-2

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.37.23

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.37.23

frontend Updated

mirantis.azurecr.io/core/frontend:1.37.23

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.37.23

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.37.23

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.37.23

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.37.23

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.37.23

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.37.23

mcc-haproxy Updated

mirantis.azurecr.io/lcm/mcc-haproxy:v0.22.0-63-g8f4f248

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.22.0-63-g8f4f248

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-2

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.37.23

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager-amd64:v1.24.5-10-g93314b86

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.37.23

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.37.23

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.37.23

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.37.23

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.37.23

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-4

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.37.23

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.37.23

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.37.23

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.37.23

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.37.23

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.37.23

vsphere-vm-template-controller Updated

mirantis.azurecr.io/core/vsphere-vm-template-controller:1.37.23

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts

iam Updated

https://binary.mirantis.com/iam/helm/iam-2.5.4.tgz

Docker images

keycloak

mirantis.azurecr.io/iam/keycloak:0.6.0

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-27d64fb-20230421151539

mariadb

mirantis.azurecr.io/general/mariadb:10.6.12-focal-20230423170220

Security notes

In total, 63 Common Vulnerabilities and Exposures (CVE) of high severity have been fixed in 2.24.3 since Container Cloud 2.24.1.

The summary table contains the total number of unique CVEs along with the total number of issues fixed across the images.

The full list of the CVEs present in the current Container Cloud release is available at the Mirantis Security Portal.

Addressed CVEs - summary

Severity

Critical

High

Total

Unique CVEs

0

15

15

Total issues across images

0

63

63

Addressed CVEs - detailed

Image

Component name

CVE

bm/external/metallb/controller

libcrypto3

CVE-2023-0464 (High)

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

CVE-2023-0464 (High)

golang.org/x/net

CVE-2022-41723 (High)

bm/external/metallb/speaker

libcrypto3

CVE-2023-2650 (High)

CVE-2023-0464 (High)

libssl3

CVE-2023-0464 (High)

CVE-2023-2650 (High)

golang.org/x/net

CVE-2022-41723 (High)

core/external/cert-manager-cainjector

golang.org/x/net

CVE-2022-41723 (High)

core/external/cert-manager-controller

golang.org/x/net

CVE-2022-41723 (High)

core/external/cert-manager-webhook

golang.org/x/net

CVE-2022-41723 (High)

core/external/nginx

nghttp2-libs

CVE-2023-35945 (High)

core/frontend

nghttp2-libs

CVE-2023-35945 (High)

lcm/external/csi-attacher

github.com/prometheus/client_golang

CVE-2022-21698 (High)

golang.org/x/net

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

gopkg.in/yaml.v3

CVE-2022-28948 (High)

lcm/external/csi-node-driver-registrar

github.com/prometheus/client_golang

CVE-2022-21698 (High)

golang.org/x/net

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

lcm/external/csi-provisioner

golang.org/x/crypto

CVE-2021-43565 (High)

CVE-2022-27191 (High)

github.com/prometheus/client_golang

CVE-2022-21698 (High)

golang.org/x/net

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

gopkg.in/yaml.v3

CVE-2022-28948 (High)

lcm/external/csi-resizer

github.com/prometheus/client_golang

CVE-2022-21698 (High)

golang.org/x/net

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

gopkg.in/yaml.v3

CVE-2022-28948 (High)

lcm/external/csi-snapshotter

github.com/prometheus/client_golang

CVE-2022-21698 (High)

golang.org/x/net

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

gopkg.in/yaml.v3

CVE-2022-28948 (High)

lcm/external/livenessprobe

golang.org/x/text

CVE-2021-38561 (High)

CVE-2022-32149 (High)

github.com/prometheus/client_golang

CVE-2022-21698 (High)

golang.org/x/net

CVE-2022-27664 (High)

lcm/kubernetes/cinder-csi-plugin-amd64

libpython3.7-minimal

CVE-2021-3737 (High)

CVE-2020-10735 (High)

CVE-2022-45061 (High)

CVE-2015-20107 (High)

libpython3.7-stdlib

CVE-2021-3737 (High)

CVE-2020-10735 (High)

CVE-2022-45061 (High)

CVE-2015-20107 (High)

python3.7

CVE-2021-3737 (High)

CVE-2020-10735 (High)

CVE-2022-45061 (High)

CVE-2015-20107 (High)

python3.7-minimal

CVE-2021-3737 (High)

CVE-2020-10735 (High)

CVE-2022-45061 (High)

CVE-2015-20107 (High)

libssl1.1

CVE-2023-2650 (High)

CVE-2023-0464 (High)

openssl

CVE-2023-2650 (High)

CVE-2023-0464 (High)

lcm/mcc-haproxy

nghttp2-libs

CVE-2023-35945 (High)

openstack/ironic

cryptography

CVE-2023-2650 (High)

openstack/ironic-inspector

cryptography

CVE-2023-2650 (High)

Addressed issues

The following issues have been addressed in the Container Cloud patch release 2.24.3 along with the patch Cluster releases 14.0.2 and 15.0.2.

  • [34638][BM] Fixed the issue with failure to delete a management cluster due to the issue with secrets during machine deletion.

  • [34220][BM] Fixed the issue with ownerReferences being lost for HardwareData after pivoting during a management cluster bootstrap.

  • [34280][LCM] Fixed the issue with no cluster reconciles generated if a cluster is stuck on waiting for agents upgrade.

  • [33439][TLS] Fixed the issue with client-certificate-controller silently replacing user-provided key if PEM header and key format do not match.

  • [33686][audit] Fixed the issue with rules provided by the docker auditd preset not covering the Sysdig Docker CIS benchmark.

  • [34080][StackLight] Fixed the issue with missing events in OpenSearch that have lastTimestamp set to null and eventTime set to a non-null value.

2.24.2

The Container Cloud major release 2.24.2 based on 2.24.0 and 2.24.1 provides the following:

  • Introduces support for the major Cluster release 15.0.1 that is based on the Cluster release 14.0.1 and represents Mirantis OpenStack for Kubernetes (MOSK) 23.2. This Cluster release is based on the updated version of Mirantis Kubernetes Engine 3.6.5 with Kubernetes 1.24 and Mirantis Container Runtime 20.10.17.

  • Supports the latest Cluster release 14.0.1.

  • Does not support greenfield deployments based on deprecated Cluster release 14.0.0 along with 12.7.x and 11.7.x series. Use the latest available Cluster releases of the series instead.

For main deliverables of the Container Cloud release 2.24.2, refer to its parent release 2.24.0.

Caution

Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

2.24.1

The Container Cloud patch release 2.24.1 based on 2.24.0 includes updated baremetal-operator, admission-controller, and iam artifacts and provides hot fixes for the following issues:

  • [34218] Fixed the issue with the iam-keycloak Pod being stuck in the Pending state during Keycloak upgrade to version 21.1.1.

  • [34247] Fixed the issue with MKE backup failing during cluster update due to wrong permissions in the etcd backup directory. If the issue still persists, which may occur on clusters that were originally deployed using early Container Cloud releases delivered in 2020-2021, follow the workaround steps described in Known issues: LCM.

Note

Container Cloud patch release 2.24.1 does not introduce new Cluster releases.

For main deliverables of the Container Cloud release 2.24.1, refer to its parent release 2.24.0.

Caution

Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

2.24.0

Important

Container Cloud 2.24.0 has been successfully applied to a certain number of clusters. The 2.24.0 related documentation content fully applies to these clusters.

If your cluster started to update but was reverted to the previous product version or the update is stuck, you automatically receive the 2.24.1 patch release with the bug fixes to unblock the update to the 2.24 series.

There is no impact on the cluster workloads. For details on the patch release, see 2.24.1.

The Mirantis Container Cloud GA release 2.24.0:

  • Introduces support for the Cluster release 14.0.0 that is based on Mirantis Container Runtime 20.10.17 and Mirantis Kubernetes Engine 3.6.5 with Kubernetes 1.24.

  • Supports the latest major and patch Cluster releases of the 12.7.x series that supports Mirantis OpenStack for Kubernetes (MOSK) 23.1 series.

  • Does not support greenfield deployments on deprecated Cluster releases 12.7.3, 11.7.4, or earlier patch releases, 12.5.0, or 11.7.0. Use the latest available Cluster releases of the series instead.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.24.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.24.0. For the list of enhancements in the Cluster release 14.0.0 that is introduced by the Container Cloud release 2.24.0, see 14.0.0.

Automated upgrade of operating system on bare metal clusters

Support status of the feature

  • Since MOSK 23.2, the feature is generally available for MOSK clusters.

  • Since Container Cloud 2.24.2, the feature is generally available for any type of bare metal clusters.

  • Since Container Cloud 2.24.0, the feature is available as Technology Preview for management and regional clusters only.

Implemented automatic in-place upgrade of an operating system (OS) distribution on bare metal clusters. The OS upgrade occurs as part of a cluster update that requires machine reboot. The OS upgrade workflow is as follows:

  1. The distribution ID value is taken from the id field of the distribution from the allowedDistributions list in the spec of the ClusterRelease object.

  2. The distribution that has the default: true value is used during update. This distribution ID is set in the spec:providerSpec:value:distribution field of the Machine object during cluster update.

On management and regional clusters, the operating system upgrades automatically during cluster update. For managed clusters, an in-place OS distribution upgrade should be performed between cluster updates. This scenario implies machine cordoning, draining, and reboot.
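The following kubectl sketch illustrates how you might inspect the fields named in the workflow above. The object names are placeholders, and the exact location of allowedDistributions in the ClusterRelease spec is an assumption to verify against your API version:

# Distribution currently set for a machine (field path from the workflow above)
kubectl -n <cluster_namespace> get machine <machine_name> \
  -o jsonpath='{.spec.providerSpec.value.distribution}{"\n"}'

# Allowed distributions and their default flags in the Cluster release (assumed path)
kubectl get clusterrelease <cluster_release_name> \
  -o jsonpath='{range .spec.allowedDistributions[*]}{.id}{" default="}{.default}{"\n"}{end}'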

Warning

During the course of the Container Cloud 2.28.x series, Mirantis highly recommends upgrading the operating system on all nodes of your managed clusters to Ubuntu 22.04 before the next major Cluster release becomes available.

It is not mandatory to upgrade all machines at once. You can upgrade them one by one or in small batches, for example, if the maintenance window is limited in time.

Otherwise, the Cluster release update of the Ubuntu 20.04-based managed clusters will become impossible as of Container Cloud 2.29.0 with Ubuntu 22.04 as the only supported version.

Management cluster update to Container Cloud 2.29.1 will be blocked if at least one node of any related managed cluster is running Ubuntu 20.04.

Support for WireGuard on bare metal clusters

TechPreview

Added initial Technology Preview support for WireGuard that enables traffic encryption on the Kubernetes workloads network. Set secureOverlay: true in the Cluster object during deployment of management, regional, or managed bare metal clusters to enable WireGuard encryption.

Also, added the possibility to configure the maximum transmission unit (MTU) size for Calico that is required for the WireGuard functionality and allows maximizing network performance.
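A minimal Cluster object fragment with WireGuard enabled might look as follows. The nesting under spec:providerSpec:value is an assumption based on the common provider settings layout; verify the exact field location for your provider before applying:

spec:
  providerSpec:
    value:
      secureOverlay: true  # enables WireGuard encryption of the Kubernetes workloads network (assumed field location)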

Note

For MOSK-based deployments, the feature support is available since MOSK 23.2.

MetalLB configuration changes for bare metal and vSphere

For management and regional clusters

Caution

For managed clusters, this object is available as Technology Preview and will become generally available in one of the following Container Cloud releases.

Introduced the following MetalLB configuration changes and objects related to address allocation and announcement of services LB for bare metal and vSphere providers:

  • Introduced the MetalLBConfigTemplate object for bare metal and the MetalLBConfig object for vSphere to be used as the default and recommended configuration method.

  • For vSphere, during creation of clusters of any type, a separate MetalLBConfig object is now created instead of the corresponding settings in the Cluster object.

  • The use of either Subnet objects without the new MetalLB objects or the configInline MetalLB value of the Cluster object is deprecated and will be removed in one of the following releases.

  • If the MetalLBConfig object is not used for MetalLB configuration related to address allocation and announcement of services LB, then automated migration applies during creation of clusters of any type or cluster update to Container Cloud 2.24.0.

    During automated migration, the MetalLBConfig and MetalLBConfigTemplate objects for bare metal or the MetalLBConfig object for vSphere are created, and the contents of the MetalLB chart configInline value are converted to the parameters of the MetalLBConfigTemplate object for bare metal or of the MetalLBConfig object for vSphere.

The following changes apply to the bare metal bootstrap procedure:

  • Moved the following environment variables from cluster.yaml.template to the dedicated ipam-objects.yaml.template:

    • BOOTSTRAP_METALLB_ADDRESS_POOL

    • KAAS_BM_BM_DHCP_RANGE

    • SET_METALLB_ADDR_POOL

    • SET_LB_HOST

  • Modified the default network configuration. Now it includes a bond interface and separated PXE and management networks. Mirantis recommends using separate PXE and management networks for management and regional clusters.

Support for RHEL 8.7 on the vSphere provider

TechPreview

Added support for RHEL 8.7 on the vSphere-based management, regional, and managed clusters.

Custom flavors for Octavia on OpenStack-based clusters

Implemented the possibility to use custom Octavia Amphora flavors that you can enable in the spec:providerSpec section of the Cluster object using serviceAnnotations:loadbalancer.openstack.org/flavor-id during management or regional cluster deployment.
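A minimal Cluster object fragment for this setting might look as follows. The flavor ID is a placeholder, and the exact nesting under spec:providerSpec is an assumption to verify against the OpenStack provider reference:

spec:
  providerSpec:
    value:
      serviceAnnotations:
        # Custom Octavia Amphora flavor ID (placeholder value)
        loadbalancer.openstack.org/flavor-id: "<octavia_amphora_flavor_id>"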

Note

For managed clusters, you can enable the feature through the Container Cloud API. The web UI functionality will be added in one of the following Container Cloud releases.

Deletion of persistent volumes during an OpenStack-based cluster deletion

Completed the development of persistent volumes deletion during an OpenStack-based managed cluster deletion by implementing the Delete all volumes in the cluster check box in the cluster deletion menu of the Container Cloud web UI.

Support for Keycloak Quarkus

Upgraded the Keycloak major version from 18.0.0 to 21.1.1. For the list of new features and enhancements, see Keycloak Release Notes.

The upgrade path is fully automated. No data migration or custom LCM changes are required.

Important

After the Keycloak upgrade, access the Keycloak Admin Console using the new URL format: https://<keycloak.ip>/auth instead of https://<keycloak.ip>. Otherwise, the Resource not found error displays in a browser.

Custom host names for cluster machines

TechPreview

Added initial Technology Preview support for custom host names of machines on any supported provider and any cluster type. When enabled, any machine host name in a particular region matches the related Machine object name. For example, instead of the default kaas-node-<UID>, a machine host name will be master-0. The custom naming format is more convenient and easier to operate with.

You can enable the feature before or after management or regional cluster deployment. If enabled after deployment, custom host names will apply to all newly deployed machines in the region. Existing host names will remain the same.

Parallel update of worker nodes

TechPreview

Added initial Technology Preview support for parallelizing node update operations, which significantly improves the efficiency of cluster updates. To configure parallel node updates, use the following parameters located under spec.providerSpec of the Cluster object (see the sketch after this list):

  • maxWorkerUpgradeCount - maximum number of worker nodes for simultaneous update to limit machine draining during update

  • maxWorkerPrepareCount - maximum number of workers for artifacts downloading to limit network load during update
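A minimal sketch of these parameters in the Cluster object with example values. Whether they sit directly under spec:providerSpec or under its value subsection depends on the provider, so verify against your Cluster object before applying:

spec:
  providerSpec:
    value:
      maxWorkerUpgradeCount: 3   # update at most 3 worker nodes simultaneously
      maxWorkerPrepareCount: 5   # predownload artifacts on at most 5 workers at a time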

Note

For MOSK clusters, you can start using this feature during cluster update from 23.1 to 23.2. For details, see MOSK documentation: Parallelizing node update operations.

Cache warm-up for managed clusters

Implemented the CacheWarmupRequest resource to predownload, aka warm up, a list of artifacts included in a given set of Cluster releases into the mcc-cache service only once per release. The feature facilitates and speeds up deployment and update of managed clusters.

After a successful cache warm-up, the object of the CacheWarmupRequest resource is automatically deleted from the cluster, and the cache remains available for managed cluster deployment or update until the next Container Cloud auto-upgrade of the management or regional cluster.

Caution

If the disk space for cache runs out, the cache for the oldest object is evicted. To avoid running out of space in the cache, verify and adjust its size before each cache warm-up.

Note

For MOSK-based deployments, the feature support is available since MOSK 23.2.

Support for auditd

TechPreview

Added initial Technology Preview support for the Linux Audit daemon auditd to monitor activity of cluster processes on any type of Container Cloud cluster. The feature is an essential requirement of many security guides and enables auditing of any cluster process to detect potentially malicious activity.

You can enable and configure auditd either during or after cluster deployment using the Cluster object.

Note

For MOSK-based deployments, the feature support is available since MOSK 23.2.

Enhancements for TLS certificates configuration

TechPreview

Enhanced TLS certificates configuration for cluster applications:

  • Added support for configuration of TLS certificates for MKE on management or regional clusters to the existing support on managed clusters.

  • Implemented the ability to configure TLS certificates using the Container Cloud web UI through the Security section located in the More > Configure cluster menu.

Graceful cluster reboot using web UI

Expanded the capability to perform a graceful reboot on a management, regional, or managed cluster for all supported providers by adding the Reboot machines option to the cluster menu in the Container Cloud web UI. The feature allows for a rolling reboot of all cluster machines without workloads interruption. The reboot occurs in the order of cluster upgrade policy.

Note

For MOSK-based deployments, the feature support is available since MOSK 23.2.

Creation and deletion of bare metal host credentials using web UI

Improved management of bare metal host credentials using the Container Cloud web UI:

  • Added the Add Credential menu to the Credentials tab. The feature facilitates association of credentials with bare metal hosts created using the BM Hosts tab.

  • Implemented automatic deletion of credentials during deletion of bare metal hosts after the managed cluster is deleted.

Node labeling improvements in web UI

Improved the Node Labels menu in the Container Cloud web UI by making it more intuitive. Replaced the greyed out (disabled) label names with the No labels have been assigned to this machine. message and the Add a node label button link.

Also, added the possibility to configure node labels for machine pools after deployment using the More > Configure Pool option.

Documentation enhancements

On top of continuous improvements delivered to the existing Container Cloud guides, added the documentation on managing Ceph OSDs with a separate metadata device.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.24.0 along with the Cluster release 14.0.0. For the list of hot fixes delivered in the 2.24.1 patch release, see 2.24.1.

  • [5981] Fixed the issue with upgrade of a cluster containing more than 120 nodes getting stuck on one node with errors about IP address exhaustion in the docker logs. On existing clusters, after updating to the Cluster release 14.0.0 or later, you can optionally remove the abandoned mke-overlay network using docker network rm mke-overlay.

  • [29604] Fixed the issue with the false positive failed to get kubeconfig error occurring on the Waiting for TLS settings to be applied stage during TLS configuration.

  • [29762] Fixed the issue with a wrong IP address being assigned after the MetalLB controller restart.

  • [30635] Fixed the issue with the pg_autoscaler module of Ceph Manager failing with the pool <poolNumber> has overlapping roots error if a Ceph cluster contains a mix of pools with deviceClass either explicitly specified or not specified.

  • [30857] Fixed the issue with an irrelevant error message displayed in the osd-prepare Pod during the deployment of Ceph OSDs on removable devices on AMD nodes. Now, the error message clearly states that removable devices (with hotplug enabled) are not supported for deploying Ceph OSDs. This issue has been addressed since the Cluster release 14.0.0.

  • [30781] Fixed the issue with cAdvisor failing to collect metrics on CentOS-based deployments. Missing metrics affected the KubeContainersCPUThrottlingHigh alert and the following Grafana dashboards: Kubernetes Containers, Kubernetes Pods, and Kubernetes Namespaces.

  • [31288] Fixed the issue with the Fluentd agent failing and the fluentd-logs Pods reporting the maximum open shards limit error, thus preventing OpenSearch from accepting new logs. The fix makes it possible to increase the maximum open shards limit using cluster.max_shards_per_node. For details, see Tune StackLight for long-term log retention.

  • [31485] Fixed the issue with Elasticsearch Curator not deleting indices according to the configured retention period on any type of Container Cloud clusters.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud releases 2.24.0 and 2.24.1 including the Cluster release 14.0.0.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


Bare metal
[42386] A load balancer service does not obtain the external IP address

Due to the MetalLB upstream issue, a load balancer service may not obtain the external IP address.

The issue occurs when two services share the same external IP address and have the same externalTrafficPolicy value. Initially, the services have the external IP address assigned and are accessible. After modifying the externalTrafficPolicy value for both services from Cluster to Local, the first service that has been changed remains with no external IP address assigned. However, the second service, which was changed later, has the external IP address assigned as expected.

To work around the issue, make a dummy change to the service object whose external IP is <pending>:

  1. Identify the service that is stuck:

    kubectl get svc -A | grep pending
    

    Example of system response:

    stacklight  iam-proxy-prometheus  LoadBalancer  10.233.28.196  <pending>  443:30430/TCP
    
  2. Add an arbitrary label to the service that is stuck. For example:

    kubectl label svc -n stacklight iam-proxy-prometheus reconcile=1
    

    Example of system response:

    service/iam-proxy-prometheus labeled
    
  3. Verify that the external IP was allocated to the service:

    kubectl get svc -n stacklight iam-proxy-prometheus
    

    Example of system response:

    NAME                  TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)        AGE
    iam-proxy-prometheus  LoadBalancer  10.233.28.196  10.0.34.108  443:30430/TCP  12d
    
[36131] Changes in ‘IpamHost’ are not propagated to ‘LCMMachine’

Fixed in 17.0.0 and 16.0.0

During netplan configuration after cluster deployment, changes in the IpamHost object are not propagated to LCMMachine.

The workaround is to manually add any new label to the labels section of the Machine object for the target host, which triggers machine reconciliation and propagates network changes.
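
For example, a minimal sketch using kubectl label to add a new key-value pair to the object metadata labels; the label name below is arbitrary and hypothetical:

kubectl -n <projectName> label machine <machineName> force-netplan-sync=1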

[35429] The WireGuard interface does not have the IPv4 address assigned

Fixed in 17.0.0 and 16.0.0

Due to the upstream Calico issue, on clusters with WireGuard enabled, the WireGuard interface on a node may not have the IPv4 address assigned. This leads to broken inter-Pod communication between the affected node and other cluster nodes.

The node is affected if the IP address is missing on the WireGuard interface:

ip a show wireguard.cali

Example of system response:

40: wireguard.cali: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000 link/none

The workaround is to manually restart the calico-node Pod to allocate the IPv4 address on the WireGuard interface:

docker restart $(docker ps -f "label=name=Calico node" -q)
[34280] No reconcile events generated during cluster update

Fixed in 15.0.2 and 14.0.2

The cluster update is stuck on waiting for agents to upgrade with the following message in the cluster status:

Helm charts are not installed(upgraded) yet. Not ready releases: managed-lcm-api

The workaround is to retrigger the cluster update, for example, by adding an annotation to the cluster object:

  1. Log in to a local machine where your management cluster kubeconfig is located and where kubectl is installed.

  2. Open the management Cluster object for editing:

    kubectl edit cluster <mgmtClusterName>
    
  3. Set the annotation force-reconcile: true.
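
Alternatively, a one-line sketch using kubectl annotate, assuming the annotation key is exactly force-reconcile:

kubectl annotate cluster <mgmtClusterName> force-reconcile=true --overwrite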

[34210] Helm charts installation failure during cluster update

Fixed in 14.0.0

The cluster update is blocked with the following message in the cluster status:

Helm charts are not installed(upgraded) yet.
Not ready releases: iam, managed-lcm-api, admission-controller, baremetal-operator.

Workaround:

  1. Log in to a local machine where your management cluster kubeconfig is located and where kubectl is installed.

  2. Open the baremetal-operator deployment object for editing:

    kubectl edit deploy -n kaas baremetal-operator
    
  3. Modify the image that the init container and the container are using to mirantis.azurecr.io/bm/baremetal-operator:base-alpine-20230721153358.

The baremetal-operator pods will be re-created, and the cluster update will get unblocked.

[33936] Deletion failure of a controller node during machine replacement

Fixed in 17.0.1 and 16.0.1 for MKE 3.7.2

Due to the upstream Calico issue, a controller node cannot be deleted if the calico-node Pod is stuck blocking node deletion. One of the symptoms is the following warning in the baremetal-operator logs:

Resolving dependency Service dhcp-lb in namespace kaas failed: \
the server was unable to return a response in the time allotted,\
but may still be processing the request (get endpoints dhcp-lb).

As a workaround, delete the Pod that is stuck to retrigger the node deletion.

[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.
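
For example:

kubectl cordon <nodeName>
kubectl drain <nodeName>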

[20736] Region deletion failure after regional deployment failure

If a baremetal-based regional cluster deployment fails before pivoting is done, the corresponding region deletion fails.

Workaround:

Using the command below, manually delete all possible traces of the failed regional cluster deployment, including but not limited to the following objects that contain the kaas.mirantis.com/region label of the affected region:

  • cluster

  • machine

  • baremetalhost

  • baremetalhostprofile

  • l2template

  • subnet

  • ipamhost

  • ipaddr

kubectl delete <objectName> -l kaas.mirantis.com/region=<regionName>
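
For example, a sketch that iterates over the object types listed above; verify that each resource name resolves on your management cluster before running it:

for kind in cluster machine baremetalhost baremetalhostprofile l2template subnet ipamhost ipaddr; do
  kubectl delete $kind -l kaas.mirantis.com/region=<regionName>
done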

Warning

Do not use the same region name again after the regional cluster deployment failure since some objects that reference the region name may still exist.



LCM
[31186,34132] Pods get stuck during MariaDB operations

During MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:

[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49

Workaround:

  1. Create a backup of the /var/lib/mysql directory on the mariadb-server Pod.

  2. Verify that other replicas are up and ready.

  3. Remove the galera.cache file for the affected mariadb-server Pod.

  4. Remove the affected mariadb-server Pod or wait until it is automatically restarted.

After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes and restores the quorum.
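
A minimal sketch of the steps above; the kaas namespace, the mariadb-server-0 Pod name, and the container layout are assumptions, so adjust them to your deployment and add -c <containerName> if the Pod runs multiple containers:

# Back up the MariaDB data directory from the affected Pod
kubectl -n kaas cp mariadb-server-0:/var/lib/mysql ./mysql-backup

# Remove the galera.cache file on the affected Pod
kubectl -n kaas exec mariadb-server-0 -- rm -f /var/lib/mysql/galera.cache

# Remove the affected Pod; Kubernetes re-creates it automatically
kubectl -n kaas delete pod mariadb-server-0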

[32761] Node cleanup fails due to remaining devices

Fixed in 17.1.0 and 16.1.0

On MOSK clusters, the Ansible provisioner may hang in a loop while trying to remove LVM thin pool logical volumes (LVs) due to issues with volume detection before removal. The Ansible provisioner cannot remove LVM thin pool LVs correctly, so it consistently detects the same volumes whenever it scans disks, leading to a repetitive cleanup process.

The following symptoms mean that a cluster can be affected:

  • A node was configured to use thin pool LVs. For example, it had the OpenStack Cinder role in the past.

  • A bare metal node deployment flaps between the provisioning and deprovisioning states.

  • In the Ansible provisioner logs, the following example warnings are growing:

    88621.log:7389:2023-06-22 16:30:45.109 88621 ERROR ansible.plugins.callback.ironic_log
    [-] Ansible task clean : fail failed on node 14eb0dbc-c73a-4298-8912-4bb12340ff49:
    {'msg': 'There are more devices to clean', '_ansible_no_log': None, 'changed': False}
    

    Important

    There are more devices to clean is a regular warning indicating some in-progress tasks. But if the number of such warnings is growing along with the node flapping between the provisioning and deprovisioning states, the cluster is highly likely affected by the issue.

As a workaround, erase disks manually using any preferred tool.
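
For example, a destructive sketch using standard Linux LVM tools; the volume group and device names are assumptions, and all data on them is lost:

# Remove the leftover thin pool logical volumes and the volume group
lvremove -f <volumeGroup>
vgremove -f <volumeGroup>
pvremove -f /dev/<device>

# Wipe the remaining filesystem and partition signatures from the disk
wipefs --all /dev/<device>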

[34247] MKE backup fails during cluster update

Fixed in 14.0.0

MKE backup may fail during update of a management, regional, or managed cluster due to wrong permissions in the etcd backup /var/lib/docker/volumes/ucp-backup/_data directory.

The issue affects only clusters that were originally deployed using early Container Cloud releases delivered in 2020-2021.

Workaround:

  1. Fix permissions on all affected nodes:

    chown -R nobody:nogroup /var/lib/docker/volumes/ucp-backup/_data
    
  2. Using the admin kubeconfig, increase the mkeUpgradeAttempts value:

    1. Open the LCMCluster object of the management cluster for editing:

      kubectl edit lcmcluster <mgmtClusterName>
      
    2. In the mkeUpgradeAttempts field, increase the value to 6. Once done, MKE backup retriggers automatically.

[30294] Replacement of a master node is stuck on the calico-node Pod start

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a master node on a cluster of any type, the calico-node Pod fails to start on a new node that has the same IP address as the node being replaced.

Workaround:

  1. Log in to any master node.

  2. From a CLI with an MKE client bundle, create a shell alias to start calicoctl using the mirantis/ucp-dsinfo image:

    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/key.pem \
    -e ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/ca.pem \
    -e ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v /var/lib/docker/volumes/ucp-kv-certs/_data:/var/lib/docker/volumes/ucp-kv-certs/_data:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl \
    "
    
    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \
    -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \
    -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v ucp-node-certs:/ucp-node-certs:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl --allow-version-mismatch \
    "
    

    In the above command, replace the following values with the corresponding settings of the affected cluster:

    • <etcdEndpoint> is the etcd endpoint defined in the Calico configuration file. For example, ETCD_ENDPOINTS=127.0.0.1:12378

    • <mkeVersion> is the MKE version installed on your cluster. For example, mirantis/ucp-dsinfo:3.5.7.

  3. Verify the node list on the cluster:

    kubectl get node
    
  4. Compare this list with the node list in Calico to identify the old node:

    calicoctl get node -o wide
    
  5. Remove the old node from Calico:

    calicoctl delete node kaas-node-<nodeID>
    
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>

Ceph
[34820] The Ceph ‘rook-operator’ fails to connect to RGW on FIPS nodes

Fixed in 17.1.0 and 16.1.0

Due to the upstream Ceph issue, on clusters with the Federal Information Processing Standard (FIPS) mode enabled, the Ceph rook-operator fails to connect to Ceph RADOS Gateway (RGW) pods.

As a workaround, do not place Ceph RGW pods on nodes where FIPS mode is enabled.

[34599] Ceph ‘ClusterWorkloadLock’ blocks upgrade from 2.23.5 to 2.24.1

On management clusters based on Ubuntu 18.04, after the cluster starts upgrading from 2.23.5 to 2.24.1, all controller machines are stuck in the In Progress state with the Distribution update in progress hover message displaying in the Container Cloud web UI.

The issue is caused by clusterworkloadlock containing the outdated release name in the status.release field, which blocks the LCM Controller from proceeding with the machine upgrade. This behavior results from a complete removal of the ceph-controller chart from management clusters and a failed ceph-clusterworkloadlock removal.

The workaround is to manually remove ceph-clusterworkloadlock from the management cluster to unblock upgrade:

kubectl delete clusterworkloadlock ceph-clusterworkloadlock
[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a bare metal-based managed cluster with Ceph enabled fails with a PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the descriptions of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.

Update
[33438] ‘CalicoDataplaneFailuresHigh’ alert is firing during cluster update

During cluster update of a managed bare metal cluster, the false positive CalicoDataplaneFailuresHigh alert may be firing. Disregard this alert, which will disappear once cluster update succeeds.

This behavior is typical for calico-node during upgrades because workload changes occur frequently, which can temporarily desynchronize the Calico dataplane and occasionally cause throttling when workload changes are applied to it.

Components versions

The following table lists the major components and their versions delivered in the Container Cloud releases 2.24.0 - 2.24.2.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

Bare metal Updated

ambassador

1.37.15

baremetal-operator

base-alpine-20230607164516 2.24.0

base-alpine-20230721153358 2.24.1(2)

baremetal-public-api

1.37.15

baremetal-provider

1.37.15

ironic

yoga-focal-20230605060019

kaas-ipam

base-alpine-20230614192933

keepalived

0.22.0-49-g9618f2a

local-volume-provisioner

2.5.0-4

mariadb

10.6.12-focal-20230606052917

IAM Updated

iam

2.5.1 2.24.0

2.5.3 2.24.1(2)

iam-controller

1.37.15

keycloak

21.1.1

Container Cloud Updated

admission-controller

1.37.15 2.24.0

1.37.16 2.24.1

1.37.19 2.24.2

agent-controller

1.37.15

byo-credentials-controller Removed

n/a

byo-provider Removed

n/a

ceph-kcc-controller

1.37.15

cert-manager

1.37.15

client-certificate-controller

1.37.15

event-controller

1.37.15

golang

1.20.4-alpine3.17

kaas-public-api

1.37.15

kaas-exporter

1.37.15

kaas-ui

1.37.15

license-controller

1.37.15

lcm-controller

1.37.15

machinepool-controller

1.37.15

mcc-cache

1.37.15

portforward-controller

1.37.15

proxy-controller

1.37.15

rbac-controller

1.37.15

release-controller

1.37.15

rhellicense-controller

1.37.15

scope-controller

1.37.15

user-controller

1.37.15

OpenStack Updated

openstack-provider

1.37.15

os-credentials-controller

1.37.15

VMware vSphere Updated

vsphere-provider

1.37.15

vsphere-credentials-controller

1.37.15

keepalived

0.22.0-49-g9618f2a

squid-proxy

0.0.1-10-g24a0d69

Artifacts

This section lists the component artifacts of the Container Cloud releases 2.24.0 - 2.24.2.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

ironic-python-agent.initramfs Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20230606121129

ironic-python-agent.kernel Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20230606121129

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-104-6e2e82c.tgz

Helm charts Updated

baremetal-api

https://binary.mirantis.com/core/helm/baremetal-api-1.37.15.tgz

baremetal-operator

https://binary.mirantis.com/core/helm/baremetal-operator-1.37.15.tgz 2.24.0

https://binary.mirantis.com/core/helm/baremetal-operator-1.37.16.tgz 2.24.1(2)

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.37.15.tgz

baremetal-public-api

https://binary.mirantis.com/core/helm/baremetal-public-api-1.37.15.tgz

kaas-ipam

https://binary.mirantis.com/core/helm/kaas-ipam-1.37.15.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.37.15.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.37.15.tgz

Docker images

ambassador Updated

mirantis.azurecr.io/core/external/nginx:1.37.15

baremetal-dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-alpine-20230607171021

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-alpine-20230607164516 2.24.0

mirantis.azurecr.io/bm/baremetal-operator:base-alpine-20230721153358 2.24.1(2)

bm-collective Updated

mirantis.azurecr.io/bm/bm-collective:base-alpine-20230607154546

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.37.15

ironic Updated

mirantis.azurecr.io/openstack/ironic:yoga-focal-20230605060019

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:yoga-focal-20230605060019

ironic-prometheus-exporter Updated

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20230531081117

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-alpine-20230614192933

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-27d64fb-20230421151539

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.6.12-focal-20230606052917

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.22.0-49-g9618f2a

metallb-controller Updated

mirantis.azurecr.io/bm/external/metallb/controller:v0.13.9

metallb-speaker Updated

mirantis.azurecr.io/bm/external/metallb/speaker:v0.13.9

syslog-ng Updated

mirantis.azurecr.io/bm/syslog-ng:base-apline-20230607165607


Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.37.15.tgz 2.24.0(1)

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.37.19.tgz 2.24.2

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.37.15.tgz 2.24.0(1)

https://binary.mirantis.com/core/bin/bootstrap-linux-1.37.19.tgz 2.24.2

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.37.15.tgz 2.24.0

https://binary.mirantis.com/core/helm/admission-controller-1.37.16.tgz 2.24.1(2)

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.37.15.tgz

byo-credentials-controller Removed

n/a

byo-provider Removed

n/a

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.37.15.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.37.15.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.37.15.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.37.15.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.37.15.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.37.15.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.37.15.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.37.15.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.37.15.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.37.15.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.37.15.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.37.15.tgz

mcc-cache-warmup New

https://binary.mirantis.com/core/helm/mcc-cache-warmup-1.37.15.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.37.15.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.37.15.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.37.15.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.37.15.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.37.15.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.37.15.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.37.15.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.37.15.tgz

scope-controller

https://binary.mirantis.com/core/helm/scope-controller-1.37.15.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.37.15.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.37.15.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.37.15.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.37.15.tgz

vsphere-vm-template-controller

https://binary.mirantis.com/core/helm/vsphere-vm-template-controller-1.37.15.tgz

Docker images Updated

admission-controller

mirantis.azurecr.io/core/admission-controller:1.37.15 2.24.0

mirantis.azurecr.io/core/admission-controller:1.37.16 2.24.1(2)

agent-controller

mirantis.azurecr.io/core/agent-controller:1.37.15

byo-cluster-api-controller Removed

n/a

byo-credentials-controller Removed

n/a

ceph-kcc-controller

mirantis.azurecr.io/core/ceph-kcc-controller:1.37.15

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.11.0

client-certificate-controller

mirantis.azurecr.io/core/client-certificate-controller:1.37.15

event-controller

mirantis.azurecr.io/core/event-controller:1.37.15

frontend

mirantis.azurecr.io/core/frontend:1.37.15

iam-controller

mirantis.azurecr.io/core/iam-controller:1.37.15

kaas-exporter

mirantis.azurecr.io/core/kaas-exporter:1.37.15

kproxy

mirantis.azurecr.io/core/kproxy:1.37.15

lcm-controller

mirantis.azurecr.io/core/lcm-controller:1.37.15

license-controller

mirantis.azurecr.io/core/license-controller:1.37.15

machinepool-controller

mirantis.azurecr.io/core/machinepool-controller:1.37.15

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.22.0-49-g9618f2a

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.22.0-49-g9618f2a

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-2

nginx

mirantis.azurecr.io/core/external/nginx:1.37.15

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager-amd64:v1.24.5-10-g93314b86

openstack-cluster-api-controller

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.37.15

os-credentials-controller

mirantis.azurecr.io/core/os-credentials-controller:1.37.15

portforward-controller

mirantis.azurecr.io/core/portforward-controller:1.37.15

proxy-controller

mirantis.azurecr.io/core/proxy-controller:1.37.15

rbac-controller

mirantis.azurecr.io/core/rbac-controller:1.37.15

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-4

release-controller

mirantis.azurecr.io/core/release-controller:1.37.15

rhellicense-controller

mirantis.azurecr.io/core/rhellicense-controller:1.37.15

scope-controller

mirantis.azurecr.io/core/scope-controller:1.37.15

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller

mirantis.azurecr.io/core/user-controller:1.37.15

vsphere-cluster-api-controller

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.37.15

vsphere-credentials-controller

mirantis.azurecr.io/core/vsphere-credentials-controller:1.37.15

vsphere-vm-template-controller

mirantis.azurecr.io/core/vsphere-vm-template-controller:1.37.15


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

Helm charts

iam Updated

https://binary.mirantis.com/iam/helm/iam-2.5.1.tgz 2.24.0

https://binary.mirantis.com/iam/helm/iam-2.5.3.tgz 2.24.1(2)

Docker images Updated

keycloak

mirantis.azurecr.io/iam/keycloak:0.6.0

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-27d64fb-20230421151539

mariadb

mirantis.azurecr.io/general/mariadb:10.6.12-focal-20230423170220

Security notes

In total, 2130 Common Vulnerabilities and Exposures (CVE) have been fixed in 2.24.0 since the Container Cloud 2.23.0 major release: 98 of critical and 2032 of high severity.

Among them, 984 CVEs that are listed in Addressed CVEs - detailed have been fixed since the 2.23.5 patch release: 62 of critical and 922 of high severity. The remaining CVEs were addressed since Container Cloud 2.23.0, and the fixes were released with the patch releases of the 2.23.x series.

The summary table contains the total number of unique CVEs along with the total number of issues fixed across the images.

The full list of the CVEs present in the current Container Cloud release is available at the Mirantis Security Portal.

Addressed CVEs - summary

Severity

Critical

High

Total

Unique CVEs

18

88

106

Total issues across images

62

922

984

Addressed CVEs - detailed

Image

Component name

CVE

bm/baremetal-dnsmasq

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

bm/baremetal-operator

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

cryptography

CVE-2023-2650 (High)

bm/bm-collective

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

bm/kaas-ipam

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

bm/syslog-ng

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

ceph/mcp/ceph-controller

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

ceph/rook

openssl

CVE-2022-3786 (High)

CVE-2023-0286 (High)

CVE-2022-3602 (High)

openssl-libs

CVE-2022-3602 (High)

CVE-2022-3786 (High)

CVE-2023-0286 (High)

cryptography

CVE-2023-2650 (High)

core/admission-controller

helm.sh/helm/v3

CVE-2021-32690 (High)

CVE-2022-23525 (High)

CVE-2022-23526 (High)

CVE-2022-23524 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/agent-controller

helm.sh/helm/v3

CVE-2021-32690 (High)

CVE-2022-23525 (High)

CVE-2022-23526 (High)

CVE-2022-23524 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/aws-cluster-api-controller

helm.sh/helm/v3

CVE-2021-32690 (High)

CVE-2022-23525 (High)

CVE-2022-23526 (High)

CVE-2022-23524 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/aws-credentials-controller

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/azure-cluster-api-controller

helm.sh/helm/v3

CVE-2022-23525 (High)

CVE-2022-23526 (High)

CVE-2022-23524 (High)

CVE-2021-32690 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/azure-credentials-controller

helm.sh/helm/v3

CVE-2021-32690 (High)

CVE-2022-23525 (High)

CVE-2022-23526 (High)

CVE-2022-23524 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/bootstrap-controller

helm.sh/helm/v3

CVE-2022-23525 (High)

CVE-2022-23526 (High)

CVE-2022-23524 (High)

CVE-2021-32690 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/byo-cluster-api-controller

helm.sh/helm/v3

CVE-2021-32690 (High)

CVE-2022-23525 (High)

CVE-2022-23526 (High)

CVE-2022-23524 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/byo-credentials-controller

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/ceph-kcc-controller

helm.sh/helm/v3

CVE-2022-23525 (High)

CVE-2022-23526 (High)

CVE-2022-23524 (High)

CVE-2021-32690 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/cluster-api-provider-baremetal

helm.sh/helm/v3

CVE-2022-23525 (High)

CVE-2022-23526 (High)

CVE-2022-23524 (High)

CVE-2021-32690 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/configuration-collector

helm.sh/helm/v3

CVE-2021-32690 (High)

CVE-2022-23525 (High)

CVE-2022-23526 (High)

CVE-2022-23524 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/equinix-cluster-api-controller

helm.sh/helm/v3

CVE-2021-32690 (High)

CVE-2022-23525 (High)

CVE-2022-23526 (High)

CVE-2022-23524 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/equinix-credentials-controller

helm.sh/helm/v3

CVE-2021-32690 (High)

CVE-2022-23525 (High)

CVE-2022-23526 (High)

CVE-2022-23524 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/event-controller

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/external/nginx

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

libx11

CVE-2023-3138 (High)

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

core/frontend

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

libx11

CVE-2023-3138 (High)

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

core/iam-controller

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/kaas-exporter

helm.sh/helm/v3

CVE-2021-32690 (High)

CVE-2022-23525 (High)

CVE-2022-23526 (High)

CVE-2022-23524 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/kproxy

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/lcm-controller

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/license-controller

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/machinepool-controller

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/openstack-cluster-api-controller

helm.sh/helm/v3

CVE-2022-23525 (High)

CVE-2022-23526 (High)

CVE-2022-23524 (High)

CVE-2021-32690 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/os-credentials-controller

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/portforward-controller

helm.sh/helm/v3

CVE-2022-23525 (High)

CVE-2022-23526 (High)

CVE-2022-23524 (High)

CVE-2021-32690 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/proxy-controller

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/rbac-controller

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/release-controller

helm.sh/helm/v3

CVE-2021-32690 (High)

CVE-2022-23525 (High)

CVE-2022-23526 (High)

CVE-2022-23524 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/rhellicense-controller

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/scope-controller

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/user-controller

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/vsphere-cluster-api-controller

helm.sh/helm/v3

CVE-2022-23525 (High)

CVE-2022-23526 (High)

CVE-2022-23524 (High)

CVE-2021-32690 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/vsphere-credentials-controller

helm.sh/helm/v3

CVE-2022-23525 (High)

CVE-2022-23526 (High)

CVE-2022-23524 (High)

CVE-2021-32690 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

core/vsphere-vm-template-controller

helm.sh/helm/v3

CVE-2021-32690 (High)

CVE-2022-23525 (High)

CVE-2022-23526 (High)

CVE-2022-23524 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

iam/keycloak

io.vertx:vertx-core

CVE-2021-4125 (High)

CVE-2021-44228 (Critical)

CVE-2021-44530 (Critical)

CVE-2021-45046 (Critical)

org.apache.cxf:cxf-core

CVE-2022-46364 (Critical)

CVE-2022-46363 (High)

org.apache.cxf:cxf-rt-transports-http

CVE-2022-46363 (High)

CVE-2022-46364 (Critical)

org.apache.santuario:xmlsec

CVE-2022-21476 (High)

CVE-2022-47966 (Critical)

org.apache.kafka:kafka-clients

CVE-2023-25194 (High)

CVE-2021-46877 (High)

CVE-2020-36518 (High)

com.fasterxml.jackson.core:jackson-databind

CVE-2023-35116 (High)

CVE-2022-42003 (High)

CVE-2022-42004 (High)

CVE-2023-35116 (High)

CVE-2022-42003 (High)

CVE-2022-42004 (High)

CVE-2023-35116 (High)

CVE-2022-42003 (High)

CVE-2022-42004 (High)

com.google.protobuf:protobuf-java

CVE-2022-3509 (High)

CVE-2022-3510 (High)

com.google.protobuf:protobuf-java-util

CVE-2022-3509 (High)

CVE-2022-3510 (High)

org.yaml:snakeyaml

CVE-2022-25857 (High)

java-11-openjdk-headless

CVE-2023-21930 (High)

platform-python

CVE-2023-24329 (High)

python3-libs

CVE-2023-24329 (High)

lcm/docker/ucp

curl

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-23914 (Critical)

CVE-2023-28319 (High)

libcurl

CVE-2023-28319 (High)

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-23914 (Critical)

github.com/crewjam/saml

CVE-2022-41912 (Critical)

CVE-2023-28119 (High)

libcrypto1.1

CVE-2023-0464 (High)

CVE-2023-2650 (High)

libssl1.1

CVE-2023-0464 (High)

CVE-2023-2650 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

github.com/opencontainers/runc

CVE-2023-28642 (High)

github.com/docker/cli

CVE-2021-41092 (High)

golang.org/x/net

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

lcm/docker/ucp-agent

curl

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-28319 (High)

CVE-2023-23914 (Critical)

libcurl

CVE-2023-23914 (Critical)

CVE-2023-28319 (High)

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

github.com/crewjam/saml

CVE-2022-41912 (Critical)

CVE-2023-28119 (High)

libcrypto1.1

CVE-2023-2650 (High)

CVE-2023-0464 (High)

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

libssl1.1

CVE-2023-0464 (High)

CVE-2023-2650 (High)

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

github.com/opencontainers/runc

CVE-2023-28642 (High)

github.com/docker/cli

CVE-2021-41092 (High)

golang.org/x/net

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

lcm/docker/ucp-auth

curl

CVE-2023-23914 (Critical)

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-28319 (High)

libcurl

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-28319 (High)

CVE-2023-23914 (Critical)

github.com/crewjam/saml

CVE-2022-41912 (Critical)

CVE-2023-28119 (High)

libcrypto1.1

CVE-2023-0464 (High)

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-2650 (High)

libssl1.1

CVE-2023-2650 (High)

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-0464 (High)

golang.org/x/net

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

lcm/docker/ucp-auth-store

github.com/crewjam/saml

CVE-2023-28119 (High)

CVE-2022-41912 (Critical)

curl

CVE-2023-28319 (High)

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

libcurl

CVE-2023-28319 (High)

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

libcrypto1.1

CVE-2023-2650 (High)

CVE-2023-0464 (High)

libssl1.1

CVE-2023-0464 (High)

CVE-2023-2650 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

golang.org/x/net

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

lcm/docker/ucp-azure-ip-allocator

curl

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-28319 (High)

libcurl

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-28319 (High)

libcrypto1.1

CVE-2023-2650 (High)

CVE-2023-0464 (High)

libssl1.1

CVE-2023-2650 (High)

CVE-2023-0464 (High)

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

lcm/docker/ucp-calico-cni

github.com/emicklei/go-restful

CVE-2022-1996 (Critical)

golang.org/x/crypto

CVE-2022-27191 (High)

CVE-2020-29652 (High)

CVE-2021-43565 (High)

golang.org/x/text

CVE-2022-32149 (High)

CVE-2020-14040 (High)

CVE-2021-38561 (High)

CVE-2022-32149 (High)

golang.org/x/net

CVE-2022-27664 (High)

CVE-2021-33194 (High)

CVE-2022-27664 (High)

github.com/containernetworking/cni

CVE-2021-20206 (High)

github.com/gogo/protobuf

CVE-2021-3121 (High)

lcm/docker/ucp-calico-kube-controllers

github.com/emicklei/go-restful

CVE-2022-1996 (Critical)

golang.org/x/net

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

lcm/docker/ucp-calico-node

github.com/emicklei/go-restful

CVE-2022-1996 (Critical)

openssl-libs

CVE-2023-0286 (High)

golang.org/x/net

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

lcm/docker/ucp-cfssl

curl

CVE-2023-28319 (High)

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-23914 (Critical)

libcurl

CVE-2023-23914 (Critical)

CVE-2023-28319 (High)

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

libcrypto1.1

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-0464 (High)

CVE-2023-2650 (High)

libssl1.1

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-0464 (High)

CVE-2023-2650 (High)

golang.org/x/net

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

lcm/docker/ucp-compose

github.com/emicklei/go-restful

CVE-2022-1996 (Critical)

golang.org/x/crypto

CVE-2021-43565 (High)

CVE-2022-27191 (High)

CVE-2021-43565 (High)

CVE-2022-27191 (High)

golang.org/x/net

CVE-2021-33194 (High)

CVE-2022-27664 (High)

CVE-2021-33194 (High)

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

CVE-2021-38561 (High)

CVE-2022-32149 (High)

CVE-2021-38561 (High)

CVE-2022-32149 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

github.com/prometheus/client_golang

CVE-2022-21698 (High)

lcm/docker/ucp-containerd-shim-process

golang.org/x/net

CVE-2021-33194 (High)

CVE-2022-27664 (High)

CVE-2021-33194 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

lcm/docker/ucp-controller

curl

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-28319 (High)

CVE-2023-23914 (Critical)

libcurl

CVE-2023-23914 (Critical)

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-28319 (High)

github.com/crewjam/saml

CVE-2022-41912 (Critical)

CVE-2023-28119 (High)

libcrypto1.1

CVE-2023-2650 (High)

CVE-2023-0464 (High)

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

libssl1.1

CVE-2023-2650 (High)

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-0464 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

github.com/opencontainers/runc

CVE-2023-28642 (High)

github.com/docker/cli

CVE-2021-41092 (High)

golang.org/x/net

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

lcm/docker/ucp-coredns

golang.org/x/net

CVE-2022-27664 (High)

CVE-2022-41721 (High)

golang.org/x/text

CVE-2022-32149 (High)

lcm/docker/ucp-dsinfo

github.com/emicklei/go-restful

CVE-2022-1996 (Critical)

golang.org/x/crypto

CVE-2021-43565 (High)

CVE-2022-27191 (High)

CVE-2021-43565 (High)

golang.org/x/net

CVE-2022-27664 (High)

CVE-2021-33194 (High)

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

CVE-2021-38561 (High)

CVE-2022-32149 (High)

CVE-2021-38561 (High)

CVE-2022-32149 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

github.com/prometheus/client_golang

CVE-2022-21698 (High)

lcm/docker/ucp-etcd

curl

CVE-2023-28319 (High)

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-23914 (Critical)

libcurl

CVE-2023-28319 (High)

CVE-2023-23914 (Critical)

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

libcrypto1.1

CVE-2023-2650 (High)

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-0464 (High)

libssl1.1

CVE-2023-0464 (High)

CVE-2023-2650 (High)

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

golang.org/x/text

CVE-2022-32149 (High)

CVE-2021-38561 (High)

CVE-2022-32149 (High)

CVE-2021-38561 (High)

CVE-2022-32149 (High)

golang.org/x/net

CVE-2022-27664 (High)

lcm/docker/ucp-hardware-info

curl

CVE-2023-28319 (High)

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-23914 (Critical)

libcurl

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-23914 (Critical)

CVE-2023-28319 (High)

libcrypto1.1

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-2650 (High)

CVE-2023-0464 (High)

libssl1.1

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-2650 (High)

CVE-2023-0464 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

github.com/docker/docker

CVE-2023-28840 (High)

golang.org/x/net

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

lcm/docker/ucp-interlock

curl

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-28319 (High)

CVE-2023-23914 (Critical)

libcurl

CVE-2023-28319 (High)

CVE-2023-23914 (Critical)

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

libcrypto1.1

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-2650 (High)

CVE-2023-0464 (High)

libssl1.1

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-2650 (High)

CVE-2023-0464 (High)

golang.org/x/net

CVE-2022-41721 (High)

CVE-2022-27664 (High)

github.com/containerd/containerd

CVE-2023-25173 (High)

golang.org/x/text

CVE-2022-32149 (High)

lcm/docker/ucp-interlock-config

curl

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-28319 (High)

libcurl

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-28319 (High)

libcrypto1.1

CVE-2023-2650 (High)

CVE-2023-0464 (High)

libssl1.1

CVE-2023-2650 (High)

CVE-2023-0464 (High)

libwebp

CVE-2023-1999 (High)

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

lcm/docker/ucp-interlock-extension

curl

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-28319 (High)

CVE-2023-23914 (Critical)

libcurl

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-23914 (Critical)

CVE-2023-28319 (High)

libcrypto1.1

CVE-2023-2650 (High)

CVE-2023-0464 (High)

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

libssl1.1

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-0464 (High)

CVE-2023-2650 (High)

golang.org/x/net

CVE-2022-41721 (High)

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

lcm/docker/ucp-interlock-proxy

curl

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-28319 (High)

libcurl

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-28319 (High)

libcrypto1.1

CVE-2023-2650 (High)

CVE-2023-0464 (High)

libssl1.1

CVE-2023-0464 (High)

CVE-2023-2650 (High)

libwebp

CVE-2023-1999 (High)

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

lcm/docker/ucp-kube-ingress-controller

curl

CVE-2022-43551 (High)

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-23914 (Critical)

CVE-2022-32221 (Critical)

CVE-2022-42915 (High)

CVE-2022-42916 (High)

CVE-2023-28319 (High)

libcurl

CVE-2022-32221 (Critical)

CVE-2022-42915 (High)

CVE-2022-42916 (High)

CVE-2023-23914 (Critical)

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-28319 (High)

CVE-2022-43551 (High)

libcrypto1.1

CVE-2023-0464 (High)

CVE-2023-2650 (High)

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

libssl1.1

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-0464 (High)

CVE-2023-2650 (High)

openssl

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-2650 (High)

CVE-2023-0464 (High)

golang.org/x/net

CVE-2022-41721 (High)

CVE-2022-27664 (High)

libxml2

CVE-2022-40303 (High)

CVE-2022-40304 (High)

github.com/opencontainers/runc

CVE-2023-28642 (High)

golang.org/x/text

CVE-2022-32149 (High)

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

lcm/docker/ucp-metrics

curl

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-28319 (High)

libcurl

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-28319 (High)

libcrypto1.1

CVE-2023-0464 (High)

CVE-2023-2650 (High)

libssl1.1

CVE-2023-2650 (High)

CVE-2023-0464 (High)

github.com/docker/docker

CVE-2023-28840 (High)

golang.org/x/net

CVE-2022-41723 (High)

lcm/docker/ucp-node-feature-discovery

libssl3

CVE-2023-0286 (High)

openssl

CVE-2023-0286 (High)

github.com/prometheus/client_golang

CVE-2022-21698 (High)

golang.org/x/net

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

gopkg.in/yaml.v3

CVE-2022-28948 (High)

lcm/docker/ucp-nvidia-device-plugin

golang.org/x/net

CVE-2022-27664 (High)

CVE-2021-33194 (High)

golang.org/x/text

CVE-2022-32149 (High)

CVE-2021-38561 (High)

libssl3

CVE-2023-0286 (High)

openssl

CVE-2023-0286 (High)

github.com/prometheus/client_golang

CVE-2022-21698 (High)

lcm/docker/ucp-nvidia-gpu-feature-discovery

golang.org/x/net

CVE-2022-41721 (High)

CVE-2022-27664 (High)

libssl3

CVE-2023-0286 (High)

openssl

CVE-2023-0286 (High)

golang.org/x/text

CVE-2022-32149 (High)

lcm/docker/ucp-secureoverlay-agent

curl

CVE-2023-28319 (High)

CVE-2023-23914 (Critical)

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

libcurl

CVE-2023-28319 (High)

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-23914 (Critical)

libcrypto1.1

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-0464 (High)

CVE-2023-2650 (High)

libssl1.1

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-0464 (High)

CVE-2023-2650 (High)

golang.org/x/net

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

lcm/docker/ucp-secureoverlay-mgr

curl

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-23914 (Critical)

CVE-2023-28319 (High)

libcurl

CVE-2023-23914 (Critical)

CVE-2023-28319 (High)

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

libcrypto1.1

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-0464 (High)

CVE-2023-2650 (High)

libssl1.1

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-0464 (High)

CVE-2023-2650 (High)

golang.org/x/net

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

lcm/docker/ucp-sf-notifier

Werkzeug

CVE-2022-29361 (Critical)

CVE-2023-25577 (High)

libcrypto1.1

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-0464 (High)

CVE-2023-2650 (High)

libssl1.1

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-2650 (High)

CVE-2023-0464 (High)

openssl-dev

CVE-2023-0464 (High)

CVE-2023-2650 (High)

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

cryptography

CVE-2023-2650 (High)

Flask

CVE-2023-30861 (High)

krb5-libs

CVE-2022-42898 (High)

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

wheel

CVE-2022-40898 (High)

lcm/docker/ucp-swarm

curl

CVE-2023-23914 (Critical)

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-28319 (High)

libcurl

CVE-2023-27533 (High)

CVE-2023-27534 (High)

CVE-2023-27536 (High)

CVE-2023-28319 (High)

CVE-2023-23914 (Critical)

libcrypto1.1

CVE-2023-0464 (High)

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-2650 (High)

libssl1.1

CVE-2023-0464 (High)

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-2650 (High)

github.com/hashicorp/consul

CVE-2022-29153 (High)

CVE-2022-38149 (High)

CVE-2020-7219 (High)

CVE-2021-37219 (High)

golang.org/x/crypto

CVE-2022-27191 (High)

CVE-2020-29652 (High)

CVE-2021-43565 (High)

golang.org/x/net

CVE-2021-33194 (High)

CVE-2022-27664 (High)

github.com/docker/docker

CVE-2023-28840 (High)

github.com/docker/distribution

CVE-2017-11468 (High)

lcm/external/aws-cloud-controller-manager

github.com/emicklei/go-restful

CVE-2022-1996 (Critical)

golang.org/x/crypto

CVE-2021-43565 (High)

CVE-2022-27191 (High)

github.com/prometheus/client_golang

CVE-2022-21698 (High)

golang.org/x/net

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

gopkg.in/yaml.v3

CVE-2022-28948 (High)

lcm/external/aws-ebs-csi-driver

ncurses-libs

CVE-2023-29491 (High)

systemd-libs

CVE-2023-26604 (High)

golang.org/x/net

CVE-2022-41721 (High)

golang.org/x/text

CVE-2022-32149 (High)

lcm/external/csi-attacher

golang.org/x/crypto

CVE-2021-43565 (High)

CVE-2022-27191 (High)

CVE-2020-29652 (High)

golang.org/x/net

CVE-2021-33194 (High)

golang.org/x/text

CVE-2021-38561 (High)

github.com/gogo/protobuf

CVE-2021-3121 (High)

github.com/emicklei/go-restful

CVE-2022-1996 (Critical)

lcm/external/csi-provisioner

github.com/emicklei/go-restful

CVE-2022-1996 (Critical)

lcm/external/csi-resizer

github.com/emicklei/go-restful

CVE-2022-1996 (Critical)

lcm/helm/tiller

libcrypto1.1

CVE-2021-23840 (High)

CVE-2020-1967 (High)

CVE-2021-3450 (High)

CVE-2021-3711 (Critical)

CVE-2021-3712 (High)

libssl1.1

CVE-2020-1967 (High)

CVE-2021-3450 (High)

CVE-2021-3711 (Critical)

CVE-2021-3712 (High)

CVE-2021-23840 (High)

apk-tools

CVE-2021-36159 (Critical)

CVE-2021-30139 (High)

zlib

CVE-2022-37434 (Critical)

busybox

CVE-2021-42378 (High)

CVE-2021-42379 (High)

CVE-2021-42380 (High)

CVE-2021-42381 (High)

CVE-2021-42382 (High)

CVE-2021-42383 (High)

CVE-2021-42384 (High)

CVE-2021-42385 (High)

CVE-2021-42386 (High)

CVE-2021-28831 (High)

ssl_client

CVE-2021-28831 (High)

CVE-2021-42378 (High)

CVE-2021-42379 (High)

CVE-2021-42380 (High)

CVE-2021-42381 (High)

CVE-2021-42382 (High)

CVE-2021-42383 (High)

CVE-2021-42384 (High)

CVE-2021-42385 (High)

CVE-2021-42386 (High)

lcm/kubernetes/cinder-csi-plugin-amd64

libtasn1-6

CVE-2021-46848 (Critical)

github.com/emicklei/go-restful

CVE-2022-1996 (Critical)

libssl1.1

CVE-2023-0286 (High)

CVE-2022-4450 (High)

CVE-2023-0215 (High)

openssl

CVE-2023-0286 (High)

CVE-2022-4450 (High)

CVE-2023-0215 (High)

libsystemd0

CVE-2023-26604 (High)

libudev1

CVE-2023-26604 (High)

udev

CVE-2023-26604 (High)

libgnutls30

CVE-2023-0361 (High)

golang.org/x/net

CVE-2022-27664 (High)

golang.org/x/text

CVE-2022-32149 (High)

gopkg.in/yaml.v3

CVE-2022-28948 (High)

lcm/kubernetes/openstack-cloud-controller-manager-amd64

github.com/emicklei/go-restful

CVE-2022-1996 (Critical)

zlib

CVE-2022-37434 (Critical)

golang.org/x/crypto

CVE-2022-27191 (High)

CVE-2021-43565 (High)

golang.org/x/text

CVE-2021-38561 (High)

CVE-2022-32149 (High)

github.com/prometheus/client_golang

CVE-2022-21698 (High)

golang.org/x/net

CVE-2022-27664 (High)

gopkg.in/yaml.v3

CVE-2022-28948 (High)

k8s.io/kubernetes

CVE-2021-25741 (High)

lcm/mcc-haproxy

pcre2

CVE-2022-1586 (Critical)

CVE-2022-1587 (Critical)

zlib

CVE-2022-37434 (Critical)

libcrypto1.1

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-2650 (High)

CVE-2023-0464 (High)

libssl1.1

CVE-2022-4450 (High)

CVE-2023-0215 (High)

CVE-2023-0286 (High)

CVE-2023-2650 (High)

CVE-2023-0464 (High)

busybox

CVE-2022-30065 (High)

ssl_client

CVE-2022-30065 (High)

lcm/registry

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

mirantis/ceph

openssl

CVE-2022-3786 (High)

CVE-2023-0286 (High)

CVE-2022-3602 (High)

openssl-libs

CVE-2022-3602 (High)

CVE-2022-3786 (High)

CVE-2023-0286 (High)

python3

CVE-2023-24329 (High)

python3-devel

CVE-2023-24329 (High)

python3-libs

CVE-2023-24329 (High)

mirantis/cephcsi

openssl

CVE-2022-3786 (High)

CVE-2023-0286 (High)

CVE-2022-3602 (High)

openssl-libs

CVE-2022-3602 (High)

CVE-2022-3786 (High)

CVE-2023-0286 (High)

cryptography

CVE-2023-2650 (High)

mirantis/fio

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

stacklight/alerta-web

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

stacklight/alertmanager-webhook-servicenow

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

openssl-dev

CVE-2023-2650 (High)

Flask

CVE-2023-30861 (High)

stacklight/alpine-utils

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

stacklight/blackbox-exporter

golang.org/x/net

CVE-2022-41723 (High)

stacklight/cadvisor

libcrypto1.1

CVE-2023-2650 (High)

libssl1.1

CVE-2023-2650 (High)

stacklight/cerebro

org.xerial:sqlite-jdbc

CVE-2023-32697 (Critical)

com.fasterxml.jackson.core:jackson-databind

CVE-2023-35116 (High)

CVE-2022-42003 (High)

CVE-2022-42004 (High)

CVE-2020-36518 (High)

CVE-2021-46877 (High)

libssl1.1

CVE-2023-2650 (High)

CVE-2023-0464 (High)

openssl

CVE-2023-2650 (High)

CVE-2023-0464 (High)

stacklight/ironic-prometheus-exporter

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

stacklight/k8s-sidecar

libcrypto1.1

CVE-2023-2650 (High)

libssl1.1

CVE-2023-2650 (High)

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

stacklight/kubectl

libssl1.1

CVE-2023-2650 (High)

CVE-2023-0464 (High)

openssl

CVE-2023-2650 (High)

CVE-2023-0464 (High)

stacklight/metric-collector

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

stacklight/node-exporter

golang.org/x/net

CVE-2022-41723 (High)

stacklight/opensearch

org.codelibs.elasticsearch.module:ingest-common

CVE-2019-7611 (High)

CVE-2015-5377 (Critical)

org.springframework:spring-core

CVE-2023-20860 (High)

stacklight/opensearch-dashboards

decode-uri-component

CVE-2022-38900 (High)

glob-parent

CVE-2021-35065 (High)

stacklight/prometheus

github.com/docker/docker

CVE-2023-28840 (High)

golang.org/x/net

CVE-2022-41723 (High)

stacklight/prometheus-es-exporter

libcrypto1.1

CVE-2023-2650 (High)

libssl1.1

CVE-2023-2650 (High)

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

stacklight/prometheus-libvirt-exporter

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

stacklight/prometheus-patroni-exporter

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

stacklight/prometheus-relay

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

stacklight/sf-notifier

libcrypto1.1

CVE-2023-2650 (High)

libssl1.1

CVE-2023-2650 (High)

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

openssl-dev

CVE-2023-2650 (High)

stacklight/sf-reporter

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

stacklight/stacklight-toolkit

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

stacklight/telegraf

libssl1.1

CVE-2023-2650 (High)

CVE-2023-0464 (High)

openssl

CVE-2023-2650 (High)

CVE-2023-0464 (High)

stacklight/telemeter

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

stacklight/tungstenfabric-prometheus-exporter

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

stacklight/yq

libcrypto3

CVE-2023-2650 (High)

libssl3

CVE-2023-2650 (High)

Update notes

This section describes the specific actions you as a cloud operator need to complete before or after your Container Cloud cluster update to the Cluster release 14.0.0.

Consider this information as a supplement to the generic update procedures published in Operations Guide: Automatic upgrade of a management cluster and Update a managed cluster.

Pre-update actions
Update L2 templates on existing bare metal clusters

Since Container Cloud 2.24.0, the use of the l3Layout section in L2 templates is mandatory. Therefore, if your L2 templates do not contain this section, manually add it to the templates of all existing clusters, defining every subnet that is used in the npTemplate section of the L2 template.

Example L2 template with the l3Layout section
apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  labels:
    bm-1490-template-controls-netplan: anymagicstring
    cluster.sigs.k8s.io/cluster-name: managed-cluster
    kaas.mirantis.com/provider: baremetal
    kaas.mirantis.com/region: region-one
  name: bm-1490-template-controls-netplan
  namespace: managed-ns
spec:
  ifMapping:
  - enp9s0f0
  - enp9s0f1
  - eno1
  - ens3f1
  l3Layout:
  - scope: namespace
    subnetName: lcm-nw
  - scope: namespace
    subnetName: storage-frontend
  - scope: namespace
    subnetName: storage-backend
  - scope: namespace
    subnetName: metallb-public-for-extiface
  npTemplate: |-
    version: 2
    ethernets:
      {{nic 0}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 0}}
        set-name: {{nic 0}}
        mtu: 1500
      {{nic 1}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 1}}
        set-name: {{nic 1}}
        mtu: 1500
      {{nic 2}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 2}}
        set-name: {{nic 2}}
        mtu: 1500
      {{nic 3}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 3}}
        set-name: {{nic 3}}
        mtu: 1500
    bonds:
      bond0:
        parameters:
          mode: 802.3ad
          #transmit-hash-policy: layer3+4
          #mii-monitor-interval: 100
        interfaces:
          - {{ nic 0 }}
          - {{ nic 1 }}
      bond1:
        parameters:
          mode: 802.3ad
          #transmit-hash-policy: layer3+4
          #mii-monitor-interval: 100
        interfaces:
          - {{ nic 2 }}
          - {{ nic 3 }}
    vlans:
      stor-f:
        id: 1494
        link: bond1
        addresses:
          - {{ip "stor-f:storage-frontend"}}
      stor-b:
        id: 1489
        link: bond1
        addresses:
          - {{ip "stor-b:storage-backend"}}
      m-pub:
        id: 1491
        link: bond0
    bridges:
      k8s-ext:
        interfaces: [m-pub]
        addresses:
          - {{ ip "k8s-ext:metallb-public-for-extiface" }}
      k8s-lcm:
        dhcp4: false
        dhcp6: false
        gateway4: {{ gateway_from_subnet "lcm-nw" }}
        addresses:
          - {{ ip "k8s-lcm:lcm-nw" }}
        nameservers:
          addresses: [ 172.18.176.6 ]
        interfaces:
          - bond0

For details on L2 template configuration, see Create L2 templates.

Caution

Partial definition of subnets is prohibited.
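
To find the L2 templates that still lack the l3Layout section, you can inspect the existing L2Template objects before the update. The following commands are a minimal sketch that assumes kubectl access to the management cluster, the jq utility, and that the L2Template resources are served as l2templates.ipam.mirantis.com; the managed-ns project name comes from the example above, so adjust it to your environment:

# List all L2 templates in the project that hosts the cluster
kubectl -n managed-ns get l2templates.ipam.mirantis.com

# Print the names of the templates that do not define the l3Layout section yet
kubectl -n managed-ns get l2templates.ipam.mirantis.com -o json \
  | jq -r '.items[] | select(.spec.l3Layout == null) | .metadata.name'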

2.23.5

Container Cloud 2.23.5 is the fourth patch release of the 2.23.x release series that incorporates security fixes for CVEs of Critical and High severity. This patch release introduces the patch Cluster releases 12.7.4 and 11.7.4.

This section describes known issues and contains the lists of updated artifacts and CVE fixes for the Container Cloud release 2.23.5. For CVE fixes delivered with the previous patch releases, see security notes for 2.23.4, 2.23.3, and 2.23.2.

For enhancements, addressed and known issues of the parent Container Cloud release 2.23.0, refer to 2.23.0.

Artifacts

This section lists the component artifacts of the Container Cloud patch release 2.23.5. For artifacts of the Cluster releases introduced in 2.23.5, see Cluster release 12.7.4 and Cluster release 11.7.4.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-api Updated

https://binary.mirantis.com/core/helm/baremetal-api-1.36.27.tgz

baremetal-operator Updated

https://binary.mirantis.com/core/helm/baremetal-operator-1.36.27.tgz

baremetal-public-api Updated

https://binary.mirantis.com/core/helm/baremetal-public-api-1.36.27.tgz

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20230126190304

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20230126190304

kaas-ipam Updated

https://binary.mirantis.com/core/helm/kaas-ipam-1.36.27.tgz

local-volume-provisioner Updated

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.36.27.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.36.27.tgz

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-104-6e2e82c.tgz

Docker images

ambassador Updated

mirantis.azurecr.io/core/external/nginx:1.36.27

baremetal-dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-alpine-20230522161215

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-focal-20230522160916

bm-collective Updated

mirantis.azurecr.io/bm/bm-collective:base-alpine-20230522161437

ironic Updated

mirantis.azurecr.io/openstack/ironic:yoga-focal-20230523063451

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:yoga-focal-20230523063451

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20230330140456

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-focal-20230522161025

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-27d64fb-20230421151539

mariadb

mirantis.azurecr.io/general/mariadb:10.6.12-focal-20230423170220

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.19.0-5-g6a7e17d

metallb-controller Updated

mirantis.azurecr.io/bm/external/metallb/controller:v0.13.7-3

metallb-speaker Updated

mirantis.azurecr.io/bm/external/metallb/speaker:v0.13.7-3

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20230424092635

Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.36.28.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.36.28.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.36.27.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.36.27.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.36.27.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.36.27.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.36.27.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.36.27.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.36.27.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.36.27.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.36.27.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.36.27.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.36.27.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.36.27.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.36.27.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.36.27.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.36.27.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.36.27.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.36.27.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.36.27.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.36.27.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.36.27.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.36.27.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.36.27.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.36.27.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.36.27.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.36.27.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.36.27.tgz

scope-controller

http://binary.mirantis.com/core/helm/scope-controller-1.36.27.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.36.27.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.36.27.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.36.27.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.36.27.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.36.27

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.36.27

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.36.27

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.36.27

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.36.27

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.6.1

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.36.27

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.36.27

configuration-collector Updated

mirantis.azurecr.io/core/configuration-collector:1.36.27

event-controller Updated

mirantis.azurecr.io/core/event-controller:1.36.27

frontend Updated

mirantis.azurecr.io/core/frontend:1.36.27

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.36.27

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.36.27

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.36.27

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.36.27

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.36.27

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.36.27

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.17.0-8-g6ca89d5

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.19.0-5-g6a7e17d

metrics-server

mirantis.azurecr.io/core/external/metrics-server:v0.6.3-2

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.36.27

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager-amd64:v1.22.1-7-gc11024f8

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.36.27

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.36.27

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.36.27

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.36.27

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.36.27

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-3

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.36.27

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.36.27

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.36.27

storage-discovery Deprecated

mirantis.azurecr.io/core/storage-discovery:1.36.27

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.36.27

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.36.27

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.36.27

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

Helm charts

iam

http://binary.mirantis.com/iam/helm/iam-2.4.43.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.16.tgz

Docker images

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.0-20200311160233

mariadb

mirantis.azurecr.io/general/mariadb:10.6.12-focal-20230331112513

keycloak

mirantis.azurecr.io/iam/keycloak:0.5.16

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-4

Security notes

In the Container Cloud patch release 2.23.5, 70 vendor-specific Common Vulnerabilities and Exposures (CVEs) have been addressed: 7 of critical and 63 of high severity.

The full list of the CVEs present in the current Container Cloud release is available at the Mirantis Security Portal.

Addressed CVEs

Image

Component name

CVE

bm/baremetal-dnsmasq

curl

CVE-2023-28319 (High)

CVE-2023-28321 (High)

CVE-2023-28322 (High)

libcurl

CVE-2023-28319 (High)

CVE-2023-28321 (High)

CVE-2023-28322 (High)

libcap2

CVE-2023-2603 (High)

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

bm/baremetal-operator

openssh-client-common

CVE-2023-28531 (Critical)

openssh-client-default

CVE-2023-28531 (Critical)

openssh-keygen

CVE-2023-28531 (Critical)

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

core/external/nginx

libwebp

CVE-2023-1999 (Critical)

curl

CVE-2023-28319 (High)

CVE-2023-28321 (High)

CVE-2023-28322 (High)

libcurl

CVE-2023-28319 (High)

CVE-2023-28321 (High)

CVE-2023-28322 (High)

core/frontend

libwebp

CVE-2023-1999 (Critical)

curl

CVE-2023-28319 (High)

CVE-2023-28321 (High)

CVE-2023-28322 (High)

libcurl

CVE-2023-28319 (High)

CVE-2023-28321 (High)

CVE-2023-28322 (High)

openstack/ironic

sqlparse

CVE-2023-30608 (High)

openstack/ironic-inspector

Flask

CVE-2023-30861 (High)

sqlparse

CVE-2023-30608 (High)

stacklight/alerta-web

libcurl

CVE-2023-28319 (High)

CVE-2023-28321 (High)

CVE-2023-28322 (High)

libpq

CVE-2023-2454 (High)

postgresql15-client

CVE-2023-2454 (High)

Flask

CVE-2023-30861 (High)

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

stacklight/alertmanager-webhook-servicenow

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

stacklight/alpine-utils

curl

CVE-2023-28319 (High)

CVE-2023-28321 (High)

CVE-2023-28322 (High)

libcurl

CVE-2023-28319 (High)

CVE-2023-28321 (High)

CVE-2023-28322 (High)

stacklight/opensearch

org.apache.santuario:xmlsec

CVE-2022-47966 (Critical)

CVE-2022-21476 (High)

org.slf4j:slf4j-api

CVE-2018-8088 (Critical)

glib2

CVE-2018-16428 (High)

CVE-2018-16429 (High)

stacklight/opensearch-dashboards

glib2

CVE-2018-16428 (High)

CVE-2018-16429 (High)

stacklight/pgbouncer

libpq

CVE-2023-2454 (High)

postgresql-client

CVE-2023-2454 (High)

stacklight/prometheus-libvirt-exporter

libcurl

CVE-2023-28319 (High)

CVE-2023-28321 (High)

CVE-2023-28322 (High)

stacklight/prometheus-patroni-exporter

ncurses-libs

CVE-2023-29491 (High)

ncurses-terminfo-base

CVE-2023-29491 (High)

stacklight/sf-notifier

flask

CVE-2023-30861 (High)

stacklight/stacklight-toolkit

curl

CVE-2023-28319 (High)

CVE-2023-28321 (High)

CVE-2023-28322 (High)

libcurl

CVE-2023-28319 (High)

CVE-2023-28321 (High)

CVE-2023-28322 (High)

stacklight/telegraf

github.com/docker/docker

CVE-2023-28840 (High)

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.23.5 including the Cluster releases 12.7.4 and 11.7.4.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.


[32761] Bare-metal nodes stuck in the cleaning state

During the initial deployment of Container Cloud, some nodes may get stuck in the cleaning state. As a workaround, wipe disks manually before initializing the Container Cloud bootstrap.
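
A minimal illustration of wiping the disks manually on a node before starting the bootstrap; the device name below is a placeholder and the commands are destructive, so double-check the target disk first:

# Identify the disks attached to the node
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

# Remove file system and partition table signatures from the target disk (destructive)
sudo wipefs --all /dev/sdX
sudo sgdisk --zap-all /dev/sdX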

2.23.4

Container Cloud 2.23.4 is the third patch release of the 2.23.x release series that includes several addressed issues and incorporates security fixes for CVEs of Critical and High severity. This patch release:

  • Introduces the patch Cluster release 12.7.3 for MOSK 23.1.3.

  • Introduces the patch Cluster release 11.7.3.

  • Supports the latest major Cluster releases 12.7.0 and 11.7.0.

  • Does not support greenfield deployments based on deprecated Cluster releases 12.7.2, 11.7.2, 12.7.1, 11.7.1, 12.5.0, and 11.6.0. Use the latest available Cluster releases of the series instead.

This section describes addressed issues and contains the lists of updated artifacts and CVE fixes for the Container Cloud release 2.23.4. For CVE fixes delivered with the previous patch releases, see security notes for 2.23.3 and 2.23.2.

For enhancements, addressed and known issues of the parent Container Cloud release 2.23.0, refer to 2.23.0.

Artifacts

This section lists the component artifacts of the Container Cloud patch release 2.23.4. For artifacts of the Cluster releases introduced in 2.23.4, see Cluster release 12.7.3 and Cluster release 11.7.3.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-api Updated

https://binary.mirantis.com/core/helm/baremetal-api-1.36.26.tgz

baremetal-operator Updated

https://binary.mirantis.com/core/helm/baremetal-operator-1.36.26.tgz

baremetal-public-api Updated

https://binary.mirantis.com/core/helm/baremetal-public-api-1.36.26.tgz

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20230126190304

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20230126190304

kaas-ipam Updated

https://binary.mirantis.com/core/helm/kaas-ipam-1.36.26.tgz

local-volume-provisioner Updated

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.36.26.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.36.26.tgz

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-104-6e2e82c.tgz

Docker images

ambassador Updated

mirantis.azurecr.io/core/external/nginx:1.36.26

baremetal-dnsmasq

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-alpine-20230421100738

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-focal-20230421100444

bm-collective

mirantis.azurecr.io/bm/bm-collective:base-alpine-20230421101033

ironic

mirantis.azurecr.io/openstack/ironic:yoga-focal-20230417060018

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:yoga-focal-20230417060018

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20230330140456

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-focal-20230421100530

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.6.12-focal-20230423170220

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.19.0-5-g6a7e17d

metallb-controller

mirantis.azurecr.io/bm/external/metallb/controller:v0.13.7-20221130155702-refresh-2023033102

metallb-speaker

mirantis.azurecr.io/bm/external/metallb/speaker:v0.13.7-20221130155702-refresh-2023033102

syslog-ng Updated

mirantis.azurecr.io/bm/syslog-ng:base-alpine-20230424092635

Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.36.26.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.36.26.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.36.26.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.36.26.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.36.26.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.36.26.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.36.26.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.36.26.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.36.26.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.36.26.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.36.26.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.36.26.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.36.26.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.36.26.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.36.26.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.36.26.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.36.26.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.36.26.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.36.26.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.36.26.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.36.26.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.36.26.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.36.26.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.36.26.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.36.26.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.36.26.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.36.26.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.36.26.tgz

scope-controller

http://binary.mirantis.com/core/helm/scope-controller-1.36.26.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.36.26.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.36.26.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.36.26.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.36.26.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.36.26

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.36.26

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.36.26

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.36.26

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.36.26

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.6.1

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.36.26

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.36.26

frontend Updated

mirantis.azurecr.io/core/frontend:1.36.26

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.36.26

kaas-exporter Updated

mirantis.azurecr.io/core/kaas-exporter:1.36.26

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.36.26

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.36.26

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.36.26

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.36.26

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.17.0-8-g6ca89d5

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.19.0-5-g6a7e17d

metrics-server

mirantis.azurecr.io/core/external/metrics-server:v0.6.3-2

nginx Updated

mirantis.azurecr.io/core/external/nginx:1.36.26

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager-amd64:v1.22.1-7-gc11024f8

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.36.26

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.36.26

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.36.26

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.36.26

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.36.26

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-3

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.36.26

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.36.26

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.36.26

storage-discovery Deprecated

mirantis.azurecr.io/core/storage-discovery:1.36.26

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.36.26

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.36.26

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.36.26

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

Helm charts

iam

http://binary.mirantis.com/iam/helm/iam-2.4.43.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.16.tgz

Docker images

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.0-20200311160233

mariadb

mirantis.azurecr.io/general/mariadb:10.6.12-focal-20230331112513

keycloak

mirantis.azurecr.io/iam/keycloak:0.5.16

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-4

Security notes

In the Container Cloud patch release 2.23.4, 35 vendor-specific CVEs have been addressed, 1 of critical and 34 of high severity.

The full list of the CVEs present in the current Container Cloud release is available at the Mirantis Security Portal.

Addressed issues

The following issues have been addressed in the Container Cloud patch release 2.23.4 along with the Cluster releases 12.7.3 and 11.7.3:

  • [31869] Fixed the issue with agent-controller failing to obtain secrets due to the incorrect indexer initialization.

  • [31810,30970] Fixed the issue with hardware.storage flapping in the machine status and causing constant reconciles.

  • [30474,28654] Fixed the issue with the agent-controller secrets leaking.

  • [5771] Fixed the issue with unnecessary reconciles during compute node deployment by optimizing the baremetal-provider operation.

2.23.3

Container Cloud 2.23.3 is the second patch release of the 2.23.x release series that incorporates security fixes for CVEs of Critical and High severity. This patch release:

  • Introduces the patch Cluster release 12.7.2 for MOSK 23.1.2.

  • Introduces the patch Cluster release 11.7.2.

  • Supports the latest major Cluster releases 12.7.0 and 11.7.0.

  • Does not support greenfield deployments based on deprecated Cluster releases 12.7.1, 11.7.1, 12.5.0, and 11.6.0. Use the latest available Cluster releases of the series instead.

This section contains the lists of updated artifacts and CVE fixes for the Container Cloud release 2.23.3. For CVE fixes delivered with the previous patch release, see security notes for 2.23.2. For enhancements, addressed and known issues of the parent Container Cloud release 2.23.0, refer to 2.23.0.

Artifacts

This section lists the component artifacts of the Container Cloud patch release 2.23.3. For artifacts of the Cluster releases introduced in 2.23.3, see Cluster release 12.7.2 and Cluster release 11.7.2.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-api Updated

https://binary.mirantis.com/core/helm/baremetal-api-1.36.23.tgz

baremetal-operator Updated

https://binary.mirantis.com/core/helm/baremetal-operator-1.36.23.tgz

baremetal-public-api Updated

https://binary.mirantis.com/core/helm/baremetal-public-api-1.36.23.tgz

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20230126190304

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20230126190304

kaas-ipam Updated

https://binary.mirantis.com/core/helm/kaas-ipam-1.36.23.tgz

local-volume-provisioner Updated

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.36.23.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.36.23.tgz

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-104-6e2e82c.tgz

Docker images

ambassador

mirantis.azurecr.io/core/external/nginx:1.36.23

baremetal-dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-alpine-20230421100738

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-focal-20230421100444

bm-collective Updated

mirantis.azurecr.io/bm/bm-collective:base-alpine-20230421101033

ironic Updated

mirantis.azurecr.io/openstack/ironic:yoga-focal-20230417060018

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:yoga-focal-20230417060018

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20230330140456

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-focal-20230421100530

mariadb

mirantis.azurecr.io/general/mariadb:10.6.12-focal-20230328123811

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.19.0-5-g6a7e17d

metallb-controller

mirantis.azurecr.io/bm/external/metallb/controller:v0.13.7-20221130155702-refresh-2023033102

metallb-speaker

mirantis.azurecr.io/bm/external/metallb/speaker:v0.13.7-20221130155702-refresh-2023033102

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-focal-20230316094816

Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.36.23.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.36.23.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.36.23.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.36.23.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.36.23.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.36.23.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.36.23.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.36.23.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.36.23.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.36.23.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.36.23.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.36.23.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.36.23.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.36.23.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.36.23.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.36.23.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.36.23.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.36.23.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.36.23.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.36.23.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.36.23.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.36.23.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.36.23.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.36.23.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.36.23.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.36.23.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.36.23.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.36.23.tgz

scope-controller

http://binary.mirantis.com/core/helm/scope-controller-1.36.23.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.36.23.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.36.23.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.36.23.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.36.23.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.36.23

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.36.23

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.36.23

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.36.23

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.36.23

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.6.1

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.36.23

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.36.23

frontend Updated

mirantis.azurecr.io/core/frontend:1.36.23

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.36.23

kaas-exporter

mirantis.azurecr.io/core/kaas-exporter:1.36.23

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.36.23

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.36.23

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.36.23

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.36.23

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.17.0-8-g6ca89d5

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.19.0-5-g6a7e17d

metrics-server

mirantis.azurecr.io/core/external/metrics-server:v0.6.3-2

nginx

mirantis.azurecr.io/core/external/nginx:1.36.23

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager-amd64:v1.22.1-7-gc11024f8

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.36.23

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.36.23

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.36.23

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.36.23

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.36.23

registry

mirantis.azurecr.io/lcm/registry:v2.8.1-3

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.36.23

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.36.23

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.36.23

storage-discovery Deprecated

mirantis.azurecr.io/core/storage-discovery:1.36.23

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.36.23

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.36.23

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.36.23

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

Helm charts Updated

iam

http://binary.mirantis.com/iam/helm/iam-2.4.43.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.16.tgz

Docker images

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.0-20200311160233

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.6.12-focal-20230331112513

keycloak

mirantis.azurecr.io/iam/keycloak:0.5.16

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-4

Security notes

In the Container Cloud patch release 2.23.3, 28 vendor-specific CVEs have been addressed, 2 of critical and 26 of high severity.

The full list of the CVEs present in the current Container Cloud release is available at the Mirantis Security Portal.

2.23.2

Container Cloud 2.23.2 is the first patch release of the 2.23.x release series that incorporates security updates for CVEs with Critical and High severity. This patch release:

  • Introduces support for patch Cluster releases 12.7.1 and 11.7.1.

  • Supports the latest major Cluster releases 12.7.0 and 11.7.0.

  • Does not support greenfield deployments based on deprecated Cluster releases 12.5.0 and 11.6.0. Use the latest available Cluster releases of the series instead.

This section contains the lists of updated artifacts and CVE fixes for the Container Cloud release 2.23.2. For enhancements, addressed and known issues of the parent Container Cloud release 2.23.0, refer to 2.23.0.

Artifacts

This section lists the component artifacts of the Mirantis Container Cloud release 2.23.2. For artifacts of the Cluster releases introduced in 2.23.2, see Cluster release 12.7.1 and Cluster release 11.7.1.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-api Updated

https://binary.mirantis.com/core/helm/baremetal-api-1.36.14.tgz

baremetal-operator Updated

https://binary.mirantis.com/core/helm/baremetal-operator-1.36.15.tgz

baremetal-public-api Updated

https://binary.mirantis.com/core/helm/baremetal-public-api-1.36.14.tgz

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20230126190304

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20230126190304

kaas-ipam Updated

https://binary.mirantis.com/core/helm/kaas-ipam-1.36.14.tgz

local-volume-provisioner Updated

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.36.14.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.36.14.tgz

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-104-6e2e82c.tgz

Docker images

ambassador

mirantis.azurecr.io/core/external/nginx:1.36.14

baremetal-dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-alpine-20230406194234

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-focal-20230405200004

baremetal-resource-controller

n/a (merged to bm-collective)

bm-collective New

mirantis.azurecr.io/bm/bm-collective:base-alpine-20230405184901

dynamic_ipxe

n/a (merged to bm-collective)

dnsmasq-controller

n/a (merged to bm-collective)

ironic Updated

mirantis.azurecr.io/openstack/ironic:yoga-focal-20230403060017

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:yoga-focal-20230403060017

ironic-prometheus-exporter Updated

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20230330140456

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-focal-20230405184421

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.6.12-focal-20230328123811

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.19.0-5-g6a7e17d

metallb-controller

mirantis.azurecr.io/bm/external/metallb/controller:v0.13.7-20221130155702-refresh-2023033102

metallb-speaker

mirantis.azurecr.io/bm/external/metallb/speaker:v0.13.7-20221130155702-refresh-2023033102

syslog-ng Updated

mirantis.azurecr.io/bm/syslog-ng:base-focal-20230316094816

Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.36.14.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.36.14.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.36.14.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.36.14.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.36.14.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.36.14.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.36.14.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.36.14.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.36.14.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.36.14.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.36.14.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.36.14.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.36.14.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.36.14.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.36.14.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.36.14.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.36.14.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.36.14.tgz

machinepool-controller

https://binary.mirantis.com/core/helm/machinepool-controller-1.36.14.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.36.14.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.36.14.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.36.14.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.36.14.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.36.14.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.36.14.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.36.14.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.36.14.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.36.14.tgz

scope-controller

http://binary.mirantis.com/core/helm/scope-controller-1.36.14.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.36.14.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.36.14.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.36.14.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.36.14.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.36.14

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.36.14

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.36.14

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.36.14

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.36.14

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.6.1

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.36.14

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.36.14

frontend Updated

mirantis.azurecr.io/core/frontend:1.36.14

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.36.14

kaas-exporter

mirantis.azurecr.io/core/kaas-exporter:1.36.14

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.36.14

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.36.14

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.36.14

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.36.14

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.17.0-8-g6ca89d5

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.19.0-5-g6a7e17d

metrics-server

mirantis.azurecr.io/core/external/metrics-server:v0.6.3-2

nginx

mirantis.azurecr.io/core/external/nginx:1.36.14

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager-amd64:v1.22.1-7-gc11024f8

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.36.14

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.36.14

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.36.14

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.36.14

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.36.14

registry Updated

mirantis.azurecr.io/lcm/registry:v2.8.1-1-g7bde01d2

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.36.14

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.36.14

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.36.14

squid-proxy Updated

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-10-g24a0d69

storage-discovery Deprecated

mirantis.azurecr.io/core/storage-discovery:1.36.14

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.36.14

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.36.14

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.36.14

IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

Helm charts Updated

iam

http://binary.mirantis.com/iam/helm/iam-2.4.41.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.16.tgz

Docker images

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.0-20200311160233

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.6.12-focal-20230227122722

keycloak Updated

mirantis.azurecr.io/iam/keycloak:0.5.16

keycloak-gatekeeper Updated

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-4

Security notes

In Container Cloud 2.23.2, 1087 vendor-specific CVEs have been addressed, 53 with critical and 1034 with high severity.

The full list of the CVEs present in the current Container Cloud release is available at the Mirantis Security Portal.

2.23.1

The Mirantis Container Cloud GA release 2.23.1 is based on 2.23.0 and:

  • Introduces support for the Cluster release 12.7.0 that is based on the Cluster release 11.7.0 and represents Mirantis OpenStack for Kubernetes (MOSK) 23.1.

    This Cluster release is based on the updated version of Mirantis Kubernetes Engine 3.5.7 with Kubernetes 1.21 and Mirantis Container Runtime 20.10.13.

  • Supports the latest Cluster release 11.7.0.

  • Does not support greenfield deployments based on deprecated Cluster releases 12.5.0 and 11.6.0. Use the latest available Cluster releases of the series instead.

For details about the Container Cloud release 2.23.1, refer to its parent releases 2.23.0 and 2.22.0.

Caution

Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

2.23.0

The Mirantis Container Cloud GA release 2.23.0:

  • Introduces support for the Cluster release 11.7.0 that is based on Mirantis Container Runtime 20.10.13 and Mirantis Kubernetes Engine 3.5.7 with Kubernetes 1.21.

  • Supports the Cluster release 12.5.0 that is based on the Cluster release 11.5.0 and represents Mirantis OpenStack for Kubernetes (MOSK) 22.5.

  • Does not support greenfield deployments on deprecated Cluster releases 11.6.0, 8.10.0, and 7.11.0. Use the latest available Cluster releases of the series instead.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.23.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.23.0. For the list of enhancements in the Cluster release 11.7.0 that is introduced by the Container Cloud release 2.23.0, see the Cluster releases (managed).

Graceful cluster reboot

Implemented the capability to perform a graceful reboot of a management, regional, or managed cluster for all supported providers using the GracefulRebootRequest custom resource. Use this resource for a rolling reboot of several or all cluster machines without interrupting workloads. The reboot occurs in the order defined by the cluster upgrade policy.

The resource is also useful for a bulk reboot of machines, for example, on large clusters.
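
A minimal sketch of such a request, assuming the GracefulRebootRequest object is created in the namespace of the affected cluster with the cluster name, that spec.machines lists the machines to reboot, and that an empty list requests a reboot of all machines (the API version and field names are illustrative; verify them against the GracefulRebootRequest CRD on your management cluster):

apiVersion: kaas.mirantis.com/v1alpha1   # assumed API group/version
kind: GracefulRebootRequest
metadata:
  name: demo-cluster        # name of the affected cluster (hypothetical)
  namespace: demo-project   # project (namespace) of the affected cluster
spec:
  machines:                 # leave empty to reboot all cluster machines
  - demo-0
  - demo-1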

To verify the reboot status of a machine:

kubectl get machines <machineName> -o wide

Example of system response:

NAME    READY  LCMPHASE  NODENAME            UPGRADEINDEX  REBOOTREQUIRED  WARNINGS
demo-0  true   Ready     kaas-node-c6aa8ad3  1             true

Note

For MOSK-based deployments, the feature support is available since MOSK 23.1.

Readiness fields for ‘Machine’ and ‘Cluster’ objects

Enhanced Machine and Cluster objects by adding the following output columns to the kubectl get machines -o wide and kubectl get cluster -o wide commands to simplify monitoring of machine and cluster states. More specifically, you can now obtain the following machine and cluster details:

  • Machine object:

    • READY

    • UPGRADEINDEX

    • REBOOTREQUIRED

    • WARNINGS

    • LCMPHASE (renamed from PHASE)

  • Cluster object:

    • READY

    • RELEASE

    • WARNINGS

Example system response of the kubectl get machines <machineName> -o wide command:

NAME    READY  LCMPHASE  NODENAME            UPGRADEINDEX  REBOOTREQUIRED  WARNINGS
demo-0  true   Ready     kaas-node-c6aa8ad3  1             true
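
A hypothetical example of the corresponding kubectl get cluster <clusterName> -o wide output, limited to the columns listed above and assuming a cluster named demo on the Cluster release 11.7.0 (the actual command may print additional columns):

NAME   READY  RELEASE           WARNINGS
demo   true   mke-11-7-0-3-5-7
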
Deletion of persistent volumes during an OpenStack-based cluster deletion

TechPreview

Implemented the initial Technology Preview API support for deletion of persistent volumes during an OpenStack-based managed cluster deletion. To enable the feature, set the boolean volumesCleanupEnabled option in the spec.providerSpec.value section of the Cluster object before a managed cluster deletion.
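
For example, a minimal fragment of the Cluster object with the option enabled in the location described above (set it before triggering the cluster deletion, for example, using kubectl -n <projectName> edit cluster <clusterName>):

spec:
  providerSpec:
    value:
      volumesCleanupEnabled: true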

Ability to disable time sync management

Implemented the capability to disable time sync management during a management or regional cluster bootstrap using the ntpEnabled=false option. The default setting remains ntpEnabled=true. The feature disables the management of chrony configuration by Container Cloud and enables you to use your own system for chrony management.
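
A sketch of how the option might look in the Cluster object template used during bootstrap, assuming the flag is a boolean under spec.providerSpec.value (the exact location may differ depending on the provider template you use):

spec:
  providerSpec:
    value:
      ntpEnabled: false  # assumption: disables chrony management by Container Cloud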

Note

For MOSK-based deployments, the feature support is available since MOSK 23.1.

The ‘Upgrade’ button for easy cluster update through the web UI

Implemented a separate Upgrade button in the Container Cloud web UI to simplify the start of a cluster update. This button provides easy access to the cluster update dialog and has the same functionality as the Upgrade cluster option available under the cluster menu.

The Upgrade button is located on the Clusters page next to the More action icon located in the last column for each cluster when a new Cluster release update becomes available.

If the Upgrade button is greyed out, the cluster is in maintenance mode, which you must disable before proceeding with the cluster update. For details, see Enable maintenance mode on a cluster and machine using web UI.

If the Upgrade button is not displayed, your cluster is up-to-date.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.23.0 along with the Cluster release 11.7.0:

  • [29647] Fixed the issue with the Network prepared stage getting stuck in the NotStarted status during deployment of a vSphere-based management or regional cluster with IPAM disabled.

  • [26896] Fixed the issue with the MetalLB liveness and readiness timeouts in a slow network.

  • [28313] Fixed the issue with the iam-keycloak Pod starting slowly because of DB errors causing timeouts while waiting for the OIDC configuration readiness.

  • [28675] Fixed the issue with the Ceph OSD-related parameters configured using rookConfig in KaaSCephCluster not being applied until OSDs are restarted. Now, parameters for Ceph OSD daemons apply during runtime instead of being set directly in ceph.conf. Therefore, no restart is required.

  • [30040] Fixed the issue with the HelmBundleReleaseNotDeployed alert that has the release_name=opensearch label firing during the Container Cloud or Cluster release update due to issues with the claim request size in the elasticsearch.persistentVolumeClaimSize configuration.

  • [29329] Fixed the issue with recreation of the Patroni container replica being stuck in the degraded state due to the liveness probe killing the container that runs the pg_rewind procedure during cluster update.

  • [28822] Fixed the issue with false-positive alerts related to Reference Application being triggered during its upgrade.

  • [28479] Fixed the issue with the restart count of the metric-collector Pod increasing over time with reason: OOMKilled in containerStatuses of the metric-collector Pod on baremetal-based management clusters with HTTP proxy enabled.

  • [28417] Fixed the issue with the Reports Dashboards plugin not being enabled by default, which prevented the use of the reporting option. For details about this plugin, see the GitHub OpenSearch documentation: OpenSearch Dashboards Reports.

  • [28373] Fixed the issue with Alerta getting stuck after a failed initialization during cluster creation with StackLight enabled.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.23.0 including the Cluster release 11.7.0.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


Bare metal
[29762] Wrong IP address is assigned after the MetalLB controller restart

Fixed in 14.0.0(1) and 15.0.1

Due to the upstream MetalLB issue, a race condition occurs when assigning an IP address after the MetalLB controller restart. If a new service of the LoadBalancer type is created during the MetalLB controller restart, this service can be assigned an IP address that was already assigned to another service before the restart.

To verify that the cluster is affected:

Verify whether IP addresses of the LoadBalancer (LB) type are duplicated where they are not supposed to:

kubectl get svc -A|grep LoadBalancer

Note

Some services use shared IP addresses on purpose. In the example system response below, these are services using the IP address 10.0.1.141.

Example system response:

kaas        dhcp-lb                   LoadBalancer  10.233.4.192   10.0.1.141      53:32594/UDP,67:30048/UDP,68:30464/UDP,69:31898/UDP,123:32450/UDP  13h
kaas        dhcp-lb-tcp               LoadBalancer  10.233.6.79    10.0.1.141      8080:31796/TCP,53:32012/TCP                                        11h
kaas        httpd-http                LoadBalancer  10.233.0.92    10.0.1.141      80:30115/TCP                                                       13h
kaas        iam-keycloak-http         LoadBalancer  10.233.55.2    10.100.91.101   443:30858/TCP,9990:32301/TCP                                       2h
kaas        ironic-kaas-bm            LoadBalancer  10.233.26.176  10.0.1.141      6385:31748/TCP,8089:30604/TCP,5050:32200/TCP,9797:31988/TCP,601:31888/TCP 13h
kaas        ironic-syslog             LoadBalancer  10.233.59.199  10.0.1.141      514:32098/UDP                                                      13h
kaas        kaas-kaas-ui              LoadBalancer  10.233.51.167  10.100.91.101   443:30976/TCP                                                      13h
kaas        mcc-cache                 LoadBalancer  10.233.40.68   10.100.91.102   80:32278/TCP,443:32462/TCP                                         12h
kaas        mcc-cache-pxe             LoadBalancer  10.233.10.75   10.0.1.142      80:30112/TCP,443:31559/TCP                                         12h
stacklight  iam-proxy-alerta          LoadBalancer  10.233.4.102   10.100.91.104   443:30101/TCP                                                      12h
stacklight  iam-proxy-alertmanager    LoadBalancer  10.233.46.45   10.100.91.105   443:30944/TCP                                                      12h
stacklight  iam-proxy-grafana         LoadBalancer  10.233.39.24   10.100.91.106   443:30953/TCP                                                      12h
stacklight  iam-proxy-prometheus      LoadBalancer  10.233.12.174  10.100.91.107   443:31300/TCP                                                      12h
stacklight  telemeter-server-external LoadBalancer  10.233.56.63   10.100.91.103   443:30582/TCP                                                      12h

In the above example, the iam-keycloak-http and kaas-kaas-ui services erroneously use the same IP address 10.100.91.101. They both use the same port 443, producing a collision when an application tries to access the 10.100.91.101:443 endpoint.

Workaround:

  1. Unassign the current LB IP address for the selected service, as no LB IP address can be used for the NodePort service:

    kubectl -n kaas patch svc <serviceName> -p '{"spec":{"type":"NodePort"}}'
    
  2. Assign a new LB IP address for the selected service:

    kubectl -n kaas patch svc <serviceName> -p '{"spec":{"type":"LoadBalancer"}}'
    

    The second affected service will continue using its current LB IP address.

[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.
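
For example:

kubectl cordon <nodeName>
kubectl drain <nodeName>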

[20736] Region deletion failure after regional deployment failure

If a baremetal-based regional cluster deployment fails before pivoting is done, the corresponding region deletion fails.

Workaround:

Using the command below, manually delete all possible traces of the failed regional cluster deployment, including but not limited to the following objects that contain the kaas.mirantis.com/region label of the affected region:

  • cluster

  • machine

  • baremetalhost

  • baremetalhostprofile

  • l2template

  • subnet

  • ipamhost

  • ipaddr

kubectl delete <objectName> -l kaas.mirantis.com/region=<regionName>

Warning

Do not use the same region name again after the regional cluster deployment failure since some objects that reference the region name may still exist.



LCM
[5981] Upgrade gets stuck on the cluster with more than 120 nodes

Fixed in 14.0.0(1) and 15.0.1

Upgrade of a cluster with more than 120 nodes gets stuck with errors about IP address exhaustion in the Docker logs.

Note

If you plan to scale your cluster to more than 120 nodes, the cluster will be affected by the issue. Therefore, you will have to perform the workaround below.

Workaround:

Caution

If you have not run the cluster upgrade yet, simply recreate the mke-overlay network as described in step 6 and skip all other steps.

Note

If you successfully upgraded the cluster with fewer than 120 nodes but plan to scale it to more than 120 nodes, proceed with steps 2-9.

  1. Verify that MKE nodes are upgraded:

    1. On any master node, run the following command to identify ucp-worker-agent that has a newer version:

      docker service ls
      

      Example of system response:

      ID             NAME                     MODE         REPLICAS   IMAGE                          PORTS
      7jdl9m0giuso   ucp-3-5-7                global       0/0        mirantis/ucp:3.5.7
      uloi2ixrd0br   ucp-auth-api             global       3/3        mirantis/ucp-auth:3.5.7
      pfub4xa17nkb   ucp-auth-worker          global       3/3        mirantis/ucp-auth:3.5.7
      00w1kqn0x69w   ucp-cluster-agent        replicated   1/1        mirantis/ucp-agent:3.5.7
      xjhwv1vrw9k5   ucp-kube-proxy-win       global       0/0        mirantis/ucp-agent-win:3.5.7
      oz28q8a7swmo   ucp-kubelet-win          global       0/0        mirantis/ucp-agent-win:3.5.7
      ssjwonmnvk3s   ucp-manager-agent        global       3/3        mirantis/ucp-agent:3.5.7
      ks0ttzydkxmh   ucp-pod-cleaner-win      global       0/0        mirantis/ucp-agent-win:3.5.7
      w5d25qgneibv   ucp-tigera-felix-win     global       0/0        mirantis/ucp-agent-win:3.5.7
      ni86z33o10n3   ucp-tigera-node-win      global       0/0        mirantis/ucp-agent-win:3.5.7
      iyyh1f0z6ejc   ucp-worker-agent-win-x   global       0/0        mirantis/ucp-agent-win:3.5.5
      5z6ew4fmf2mm   ucp-worker-agent-win-y   global       0/0        mirantis/ucp-agent-win:3.5.7
      gr52h05hcwwn   ucp-worker-agent-x       global       56/56      mirantis/ucp-agent:3.5.5
      e8coi9bx2j7j   ucp-worker-agent-y       global       121/121    mirantis/ucp-agent:3.5.7
      

      In the above example, it is ucp-worker-agent-y.

    2. Obtain the node list:

      docker service ps ucp-worker-agent-y | awk -F ' ' '$4 ~ /^kaas/ {print $4}' > upgraded_nodes.txt
      
  2. Identify the cluster ID. For example, run the following command on the management cluster:

    kubectl -n <clusterNamespace> get cluster <clusterName> -o json | jq '.status.providerStatus.mke.clusterID'
    
  3. Create a backup of MKE as described in the MKE documentation: Backup procedure.

  4. Remove MKE services:

    docker service rm ucp-cluster-agent ucp-manager-agent ucp-worker-agent-win-y ucp-worker-agent-y ucp-worker-agent-win-x ucp-worker-agent-x
    
  5. Remove the mke-overlay network:

    docker network rm mke-overlay
    
  6. Recreate the mke-overlay network with a correct CIDR that must be at least /20 and must not overlap with other subnets in the cluster network. For example:

    docker network create -d overlay --subnet 10.1.0.0/20 mke-overlay
    
  7. Create placeholder worker services:

    docker service create --name ucp-worker-agent-x --mode global --constraint node.labels.foo==bar --detach busybox sleep 3d
    
    docker service create --name ucp-worker-agent-win-x --mode global --constraint node.labels.foo==bar --detach busybox sleep 3d
    
  8. Recreate all MKE services using the previously obtained cluster ID. Use the target version for your cluster, for example, 3.5.7:

    docker container run --rm -it --name ucp -v /var/run/docker.sock:/var/run/docker.sock mirantis/ucp:3.5.7 upgrade --debug --manual-worker-upgrade --force-minimums --id <cluster ID> --interactive --force-port-check
    

    Note

    Because of interactive mode, you may need to use Ctrl+C when the command execution completes.

  9. Verify that all services are recreated:

    docker service ls
    

    The ucp-worker-agent-y service from the example must have 1 replica running on the node that was previously stuck.

  10. Using the node list obtained in the first step, remove the upgrade-hold labels from the nodes that were previously upgraded:

    for i in $(cat upgraded_nodes.txt); do docker node update --label-rm com.docker.ucp.upgrade-hold $i; done
    
  11. Verify that all nodes from the list obtained in the first step are present in the ucp-worker-agent-y service. For example:

    docker service ps ucp-worker-agent-y
    
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>
[30294] Replacement of a master node is stuck on the calico-node Pod start

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a master node on a cluster of any type, the calico-node Pod fails to start on a new node that has the same IP address as the node being replaced.

Workaround:

  1. Log in to any master node.

  2. From a CLI with an MKE client bundle, create a shell alias to start calicoctl using the mirantis/ucp-dsinfo image:

    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/key.pem \
    -e ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/ca.pem \
    -e ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-kv-certs/_data/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v /var/lib/docker/volumes/ucp-kv-certs/_data:/var/lib/docker/volumes/ucp-kv-certs/_data:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl \
    "
    
    alias calicoctl="\
    docker run -i --rm \
    --pid host \
    --net host \
    -e constraint:ostype==linux \
    -e ETCD_ENDPOINTS=<etcdEndpoint> \
    -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \
    -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \
    -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \
    -v /var/run/calico:/var/run/calico \
    -v ucp-node-certs:/ucp-node-certs:ro \
    mirantis/ucp-dsinfo:<mkeVersion> \
    calicoctl --allow-version-mismatch \
    "
    

    In the above commands, replace the following values with the corresponding settings of the affected cluster:

    • <etcdEndpoint> is the etcd endpoint defined in the Calico configuration file. For example, ETCD_ENDPOINTS=127.0.0.1:12378

    • <mkeVersion> is the MKE version installed on your cluster. For example, 3.5.7, which results in the mirantis/ucp-dsinfo:3.5.7 image.

  3. Verify the node list on the cluster:

    kubectl get node
    
  4. Compare this list with the node list in Calico to identify the old node:

    calicoctl get node -o wide
    
  5. Remove the old node from Calico:

    calicoctl delete node kaas-node-<nodeID>
    
[27797] A cluster ‘kubeconfig’ stops working during MKE minor version update

During update of a Container Cloud cluster of any type, if the MKE minor version is updated from 3.4.x to 3.5.x, access to the cluster using the existing kubeconfig fails with the You must be logged in to the server (Unauthorized) error due to OIDC settings being reconfigured.

As a workaround, during the cluster update process, use the admin kubeconfig instead of the existing one. Once the update completes, you can use the existing cluster kubeconfig again.

To obtain the admin kubeconfig:

kubectl --kubeconfig <pathToMgmtKubeconfig> get secret -n <affectedClusterNamespace> \
-o yaml <affectedClusterName>-kubeconfig | awk '/admin.conf/ {print $2}' | \
head -1 | base64 -d > clusterKubeconfig.yaml

If the related cluster is regional, replace <pathToMgmtKubeconfig> with <pathToRegionalKubeconfig>.


TLS configuration
[29604] The ‘failed to get kubeconfig’ error during TLS configuration

Fixed in 14.0.0(1) and 15.0.1

When setting a new Transport Layer Security (TLS) certificate for a cluster, the false-positive failed to get kubeconfig error may occur on the Waiting for TLS settings to be applied stage. No action is required; disregard the error.

To verify the status of the TLS configuration being applied:

kubectl get cluster <ClusterName> -n <ClusterProjectName> -o jsonpath-as-json="{.status.providerStatus.tls.<Application>}"

Possible values for the <Application> parameter are as follows:

  • keycloak

  • ui

  • cache

  • mke

  • iamProxyAlerta

  • iamProxyAlertManager

  • iamProxyGrafana

  • iamProxyKibana

  • iamProxyPrometheus

Example of system response:

[
    {
        "expirationTime": "2024-01-06T09:37:04Z",
        "hostname": "domain.com"
    }
]

In this example, expirationTime equals the NotAfter field of the server certificate, and hostname contains the configured application host name.
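
To cross-check the reported expirationTime against the certificate that is actually served, you can, for example, query the application endpoint with standard OpenSSL tooling (the host name and port below are placeholders):

openssl s_client -connect domain.com:443 -servername domain.com </dev/null 2>/dev/null | \
openssl x509 -noout -enddate

The notAfter value in the output corresponds to the expirationTime field shown above.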


Ceph
[30857] Irrelevant error during Ceph OSD deployment on removable devices

Fixed in 14.0.0(1) and 15.0.1

The deployment of Ceph OSDs fails with the following messages in the status section of the KaaSCephCluster custom resource:

shortClusterInfo:
  messages:
  - Not all osds are deployed
  - Not all osds are in
  - Not all osds are up

To find out whether your cluster is affected, verify whether the devices on the AMD hosts that you use for the Ceph OSD deployment are removable. For example, if the sdb device name is specified in spec.cephClusterSpec.nodes.storageDevices of the KaaSCephCluster custom resource for the affected host, run:

# cat /sys/block/sdb/removable
1

The system output above indicates that the messages in status are caused by the hotplug functionality enabled on the AMD nodes, which marks all drives as removable. Ceph in Container Cloud does not support the hotplug functionality.

As a workaround, disable the hotplug functionality in the BIOS settings for disks that are configured to be used as Ceph OSD data devices.

[30635] Ceph ‘pg_autoscaler’ is stuck with the ‘overlapping roots’ error

Fixed in 14.0.0(1) and 15.0.1

Due to the upstream Ceph issue occurring since Ceph Pacific, the pg_autoscaler module of Ceph Manager fails with the pool <poolNumber> has overlapping roots error if a Ceph cluster contains a mix of pools with deviceClass either explicitly specified or not specified.

The deviceClass parameter is required for a pool definition in the spec section of the KaaSCephCluster object, but not required for Ceph RADOS Gateway (RGW) and Ceph File System (CephFS). Therefore, if sections for Ceph RGW or CephFS data or metadata pools are defined without deviceClass, autoscaling of placement groups is disabled on the cluster due to overlapping roots. Overlapping roots mean that the Ceph RGW and/or CephFS pools use the default crush rule and are not restricted to a specific device class for storing data.
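
A quick way to check whether autoscaling is affected, assuming the rook-ceph-tools Deployment used throughout the workaround below, is to list the autoscaler status; pools skipped because of overlapping roots are expected to be missing from the output (see also step 7 of the workaround):

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd pool autoscale-status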

Note

If pools for Ceph RGW and CephFS already have deviceClass specified, skip the corresponding steps of the below procedure.

Note

Perform the below procedure on the affected managed cluster using its kubeconfig.

Workaround:

  1. Obtain failureDomain and required replicas for Ceph RGW and/or CephFS pools:

    Note

    If the KaasCephCluster spec section does not contain failureDomain, failureDomain equals host by default to store one replica per node.

    Note

    The types of pool crush rules include:

    • An erasureCoded pool requires the codingChunks + dataChunks number of available units of failureDomain.

    • A replicated pool requires the replicated.size number of available units of failureDomain.

    • To obtain Ceph RGW pools, use the spec.cephClusterSpec.objectStorage.rgw section of the KaaSCephCluster object. For example:

      objectStorage:
        rgw:
          dataPool:
            failureDomain: host
            erasureCoded:
              codingChunks: 1
              dataChunks: 2
          metadataPool:
            failureDomain: host
            replicated:
              size: 3
          gateway:
            allNodes: false
            instances: 3
            port: 80
            securePort: 8443
          name: openstack-store
          preservePoolsOnDelete: false
      

      The dataPool pool requires the sum of codingChunks and dataChunks values representing the number of available units of failureDomain. In the example above, for failureDomain: host, dataPool requires 3 available nodes to store its objects.

      The metadataPool pool requires the replicated.size number of available units of failureDomain. For failureDomain: host, metadataPool requires 3 available nodes to store its objects.

    • To obtain CephFS pools, use the spec.cephClusterSpec.sharedFilesystem.cephFS section of the KaaSCephCluster object. For example:

      sharedFilesystem:
        cephFS:
        - name: cephfs-store
          dataPools:
          - name: default-pool
            replicated:
              size: 3
            failureDomain: host
          - name: second-pool
            erasureCoded:
              dataChunks: 2
              codingChunks: 1
          metadataPool:
            replicated:
              size: 3
            failureDomain: host
          ...
      

      The default-pool and metadataPool pools require the replicated.size number of available units of failureDomain. For failureDomain: host, default-pool requires 3 available nodes to store its objects.

      The second-pool pool requires the sum of codingChunks and dataChunks representing the number of available units of failureDomain. For failureDomain: host, second-pool requires 3 available nodes to store its objects.

  2. Obtain the device class that provides the required number of replicas for the defined failureDomain.

    Obtaining of the device class
    1. Get a shell of the ceph-tools Pod:

      kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
      
    2. Obtain the Ceph crush tree with all available crush rules of the device class:

      ceph osd tree
      

      Example output:

      ID  CLASS  WEIGHT   TYPE NAME                                                STATUS  REWEIGHT  PRI-AFF
      -1         0.18713  root default
      -3         0.06238      host kaas-node-a29ecf2d-a2cc-493e-bd83-00e9639a7db8
       0    hdd  0.03119          osd.0                                                up   1.00000  1.00000
       3    ssd  0.03119          osd.3                                                up   1.00000  1.00000
      -5         0.06238      host kaas-node-dd6826b0-fe3f-407c-ae29-6b0e4a40019d
       1    hdd  0.03119          osd.1                                                up   1.00000  1.00000
       4    ssd  0.03119          osd.4                                                up   1.00000  1.00000
      -7         0.06238      host kaas-node-df65fa30-d657-477e-bad2-16f69596d37a
       2    hdd  0.03119          osd.2                                                up   1.00000  1.00000
       5    ssd  0.03119          osd.5                                                up   1.00000  1.00000
      
    3. Calculate the number of failureDomain units for each device class.

      For failureDomain: host, hdd and ssd device classes from the example output above have 3 units each.

    4. Select the device classes that meet the replicas requirement. In the example output above, both hdd and ssd are applicable to store the pool data.

    5. Exit the ceph-tools Pod.

  3. Calculate potential data size for Ceph RGW and CephFS pools.

    Calculation of data size
    1. Obtain Ceph data stored by classes and pools:

      kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph df
      

      Example output:

      --- RAW STORAGE ---
      CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
      hdd     96 GiB   90 GiB  6.0 GiB   6.0 GiB       6.26
      ssd     96 GiB   96 GiB  211 MiB   211 MiB       0.21
      TOTAL  192 GiB  186 GiB  6.2 GiB   6.2 GiB       3.24
      
      --- POOLS ---
      POOL                                ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
      device_health_metrics                1    1      0 B        0      0 B      0     42 GiB
      kubernetes-hdd                       2   32  2.3 GiB      707  4.6 GiB   5.15     42 GiB
      kubernetes-2-ssd                    11   32     19 B        1    8 KiB      0     45 GiB
      openstack-store.rgw.meta            12   32  2.5 KiB       10   64 KiB      0     45 GiB
      openstack-store.rgw.log             13   32   23 KiB      309  1.3 MiB      0     45 GiB
      .rgw.root                           14   32  4.8 KiB       16  120 KiB      0     45 GiB
      openstack-store.rgw.otp             15   32      0 B        0      0 B      0     45 GiB
      openstack-store.rgw.control         16   32      0 B        8      0 B      0     45 GiB
      openstack-store.rgw.buckets.index   17   32  2.7 KiB       22  5.3 KiB      0     45 GiB
      openstack-store.rgw.buckets.non-ec  18   32      0 B        0      0 B      0     45 GiB
      openstack-store.rgw.buckets.data    19   32  103 MiB       26  155 MiB   0.17     61 GiB
      
    2. Sum up the USED size of all <rgwName>.rgw.* pools and compare it with the AVAIL size of each applicable device class selected in the previous step.

      Note

      Because Ceph RGW pools have no explicit deviceClass specification, they may store objects on all device classes. The resulting size on the selected device class can be smaller than the calculated USED size because part of the data may already reside on that class. Therefore, limiting pools to a single device class may occupy less space than the total USED size. Nonetheless, calculate the USED size of all pools because the pool data may not yet be stored on the selected device class.
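
      A sketch of how the USED size of the Ceph RGW pools could be summed automatically, assuming jq is installed on the machine where you run kubectl and that ceph df --format json exposes per-pool statistics under .pools[].stats.bytes_used (field names may differ between Ceph versions; the manual calculation remains the authoritative approach):

      # prints the total size in bytes of all openstack-store.rgw.* pools plus .rgw.root
      kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph df --format json | \
      jq --arg prefix "openstack-store.rgw." \
        '[.pools[] | select((.name | startswith($prefix)) or .name == ".rgw.root") | .stats.bytes_used] | add'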

    3. For CephFS data or metadata pools, use the previous step to calculate the USED size of pools and compare it with the AVAIL size.

    4. From the device classes that meet the replicas requirement and have enough available size, select the one that is preferable to store the Ceph RGW and CephFS data. In the example output above, hdd and ssd are both applicable. Therefore, select any of them.

      Note

      You can select different device classes for Ceph RGW and CephFS. For example, hdd for Ceph RGW and ssd for CephFS. Select a device class based on performance expectations, if any.

  4. Create the rule-helper script to switch the Ceph RGW or CephFS pools to the selected device class.

    Creation of the rule-helper script
    1. Create the rule-helper script file:

      1. Get a shell of the ceph-tools Pod:

        kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
        
      2. Create the /tmp/rule-helper.py file with the following content:

        cat > /tmp/rule-helper.py << EOF
        import argparse
        import json
        import subprocess
        from sys import argv, exit
        
        
        def get_cmd(cmd_args):
            output_args = ['--format', 'json']
            _cmd = subprocess.Popen(cmd_args + output_args,
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            stdout, stderr = _cmd.communicate()
            if stderr:
                error = stderr
                print("[ERROR] Failed to get '{0}': {1}".format(' '.join(cmd_args), stderr))
                return
            return stdout
        
        
        def format_step(action, cmd_args):
            return "{0}:\n\t{1}".format(action, ' '.join(cmd_args))
        
        
        def process_rule(rule):
            steps = []
            new_rule_name = rule['rule_name'] + '_v2'
            if rule['type'] == "replicated":
                rule_create_args = ['ceph', 'osd', 'crush', 'create-replicated',
                    new_rule_name, rule['root'], rule['failure_domain'], rule['device_class']]
                steps.append(format_step("create a new replicated rule for pool", rule_create_args))
            else:
                new_profile_name = rule['profile_name'] + '_' + rule['device_class']
                profile_create_args = ['ceph', 'osd', 'erasure-code-profile', 'set', new_profile_name]
                for k,v in rule['profile'].items():
                    profile_create_args.append("{0}={1}".format(k,v))
                rule_create_args = ['ceph', 'osd', 'crush', 'create-erasure', new_rule_name, new_profile_name]
                steps.append(format_step("create a new erasure-coded profile", profile_create_args))
                steps.append(format_step("create a new erasure-coded rule for pool", rule_create_args))
        
            set_rule_args = ['ceph', 'osd', 'pool', 'set', 'crush_rule', rule['pool_name'], new_rule_name]
            revert_rule_args = ['ceph', 'osd', 'pool', 'set', 'crush_rule', new_rule_name, rule['pool_name']]
            rm_old_rule_args = ['ceph', 'osd', 'crush', 'rule', 'rm', rule['rule_name']]
            rename_rule_args = ['ceph', 'osd', 'crush', 'rule', 'rename', new_rule_name, rule['rule_name']]
            steps.append(format_step("set pool crush rule to new one", set_rule_args))
            steps.append("check that replication is finished and status healthy: ceph -s")
            steps.append(format_step("in case of any problems revert step 2 and stop procedure", revert_rule_args))
            steps.append(format_step("remove standard (old) pool crush rule", rm_old_rule_args))
            steps.append(format_step("rename new pool crush rule to standard name", rename_rule_args))
            if rule['type'] != "replicated":
                rm_old_profile_args = ['ceph', 'osd', 'erasure-code-profile', 'rm', rule['profile_name']]
                steps.append(format_step("remove standard (old) erasure-coded profile", rm_old_profile_args))
        
            for idx, step in enumerate(steps):
                print("  {0}) {1}".format(idx+1, step))
        
        
        def check_rules(args):
            extra_pools_lookup = []
            if args.type == "rgw":
                extra_pools_lookup.append(".rgw.root")
            pools_str = get_cmd(['ceph', 'osd', 'pool', 'ls', 'detail'])
            if pools_str == '':
                return
            rules_str = get_cmd(['ceph', 'osd', 'crush', 'rule', 'dump'])
            if rules_str == '':
                return
            try:
                pools_dump = json.loads(pools_str)
                rules_dump = json.loads(rules_str)
                if len(pools_dump) == 0:
                    print("[ERROR] No pools found")
                    return
                if len(rules_dump) == 0:
                    print("[ERROR] No crush rules found")
                    return
                crush_rules_recreate = []
                for pool in pools_dump:
                    if pool['pool_name'].startswith(args.prefix) or pool['pool_name'] in extra_pools_lookup:
                        rule_id = pool['crush_rule']
                        for rule in rules_dump:
                            if rule['rule_id'] == rule_id:
                                recreate = False
                                new_rule = {'rule_name': rule['rule_name'], 'pool_name': pool['pool_name']}
                                for step in rule.get('steps',[]):
                                    root = step.get('item_name', '').split('~')
                                    if root[0] != '' and len(root) == 1:
                                        new_rule['root'] = root[0]
                                        continue
                                    failure_domain = step.get('type', '')
                                    if failure_domain != '':
                                        new_rule['failure_domain'] = failure_domain
                                if new_rule.get('root', '') == '':
                                    continue
                                new_rule['device_class'] = args.device_class
                                if pool['erasure_code_profile'] == "":
                                    new_rule['type'] = "replicated"
                                else:
                                    new_rule['type'] = "erasure"
                                    profile_str = get_cmd(['ceph', 'osd', 'erasure-code-profile', 'get', pool['erasure_code_profile']])
                                    if profile_str == '':
                                        return
                                    profile_dump = json.loads(profile_str)
                                    profile_dump['crush-device-class'] = args.device_class
                                    new_rule['profile_name'] = pool['erasure_code_profile']
                                    new_rule['profile'] = profile_dump
                                crush_rules_recreate.append(new_rule)
                                break
                print("Found {0} pools with crush rules require device class set".format(len(crush_rules_recreate)))
                for new_rule in crush_rules_recreate:
                    print("- Pool {0} requires crush rule update, device class is not set".format(new_rule['pool_name']))
                    process_rule(new_rule)
            except Exception as err:
                print("[ERROR] Failed to get info from Ceph: {0}".format(err))
                return
        
        
        if __name__ == '__main__':
            parser = argparse.ArgumentParser(
                description='Ceph crush rules checker. Specify device class and service name.',
                prog=argv[0], usage='%(prog)s [options]')
            parser.add_argument('--type', type=str,
                                help='Type of pool: rgw, cephfs',
                                default='',
                                required=True)
            parser.add_argument('--prefix', type=str,
                                help='Pool prefix. If objectstore - use objectstore name, if CephFS - CephFS name.',
                                default='',
                                required=True)
            parser.add_argument('--device-class', type=str,
                                help='Device class to switch on.',
                                required=True)
            args = parser.parse_args()
            if len(argv) < 3:
                parser.print_help()
                exit(0)
        
            check_rules(args)
        EOF
        
      3. Exit the ceph-tools Pod.

  5. For Ceph RGW, execute the rule-helper script to output step-by-step instructions and manually run each step provided in the output.

    Note

    The following steps create new crush rules with the same parameters as before but with the device class specified, and then switch the pools to the new crush rules.

    Execution of the rule-helper script steps for Ceph RGW
    1. Get a shell of the ceph-tools Pod:

      kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
      
    2. Run the /tmp/rule-helper.py script with the following parameters:

      python3 /tmp/rule-helper.py --prefix <rgwName> --type rgw --device-class <deviceClass>
      

      Substitute the following parameters:

      • <rgwName> with the Ceph RGW name from spec.cephClusterSpec.objectStorage.rgw.name in the KaaSCephCluster object. In the example above, the name is openstack-store.

      • <deviceClass> with the device class selected in the previous steps.

    3. Using the output of the command from the previous step, run manual commands step-by-step.

      Example output for the hdd device class:

      Found 7 pools with crush rules require device class set
      - Pool openstack-store.rgw.control requires crush rule update, device class is not set
        1) create a new replicated rule for pool:
          ceph osd crush create-replicated openstack-store.rgw.control_v2 default host hdd
        2) set pool crush rule to new one:
          ceph osd pool set crush_rule openstack-store.rgw.control openstack-store.rgw.control_v2
        3) check that replication is finished and status healthy: ceph -s
        4) in case of any problems revert step 2 and stop procedure:
          ceph osd pool set crush_rule openstack-store.rgw.control_v2 openstack-store.rgw.control
        5) remove standard (old) pool crush rule:
          ceph osd crush rule rm openstack-store.rgw.control
        6) rename new pool crush rule to standard name:
          ceph osd crush rule rename openstack-store.rgw.control_v2 openstack-store.rgw.control
      - Pool openstack-store.rgw.log requires crush rule update, device class is not set
        1) create a new replicated rule for pool:
          ceph osd crush create-replicated openstack-store.rgw.log_v2 default host hdd
        2) set pool crush rule to new one:
          ceph osd pool set crush_rule openstack-store.rgw.log openstack-store.rgw.log_v2
        3) check that replication is finished and status healthy: ceph -s
        4) in case of any problems revert step 2 and stop procedure:
          ceph osd pool set crush_rule openstack-store.rgw.log_v2 openstack-store.rgw.log
        5) remove standard (old) pool crush rule:
          ceph osd crush rule rm openstack-store.rgw.log
        6) rename new pool crush rule to standard name:
          ceph osd crush rule rename openstack-store.rgw.log_v2 openstack-store.rgw.log
      - Pool openstack-store.rgw.buckets.non-ec requires crush rule update, device class is not set
        1) create a new replicated rule for pool:
          ceph osd crush create-replicated openstack-store.rgw.buckets.non-ec_v2 default host hdd
        2) set pool crush rule to new one:
          ceph osd pool set crush_rule openstack-store.rgw.buckets.non-ec openstack-store.rgw.buckets.non-ec_v2
        3) check that replication is finished and status healthy: ceph -s
        4) in case of any problems revert step 2 and stop procedure:
          ceph osd pool set crush_rule openstack-store.rgw.buckets.non-ec_v2 openstack-store.rgw.buckets.non-ec
        5) remove standard (old) pool crush rule:
          ceph osd crush rule rm openstack-store.rgw.buckets.non-ec
        6) rename new pool crush rule to standard name:
          ceph osd crush rule rename openstack-store.rgw.buckets.non-ec_v2 openstack-store.rgw.buckets.non-ec
      - Pool .rgw.root requires crush rule update, device class is not set
        1) create a new replicated rule for pool:
          ceph osd crush create-replicated .rgw.root_v2 default host hdd
        2) set pool crush rule to new one:
          ceph osd pool set crush_rule .rgw.root .rgw.root_v2
        3) check that replication is finished and status healthy: ceph -s
        4) in case of any problems revert step 2 and stop procedure:
          ceph osd pool set crush_rule .rgw.root_v2 .rgw.root
        5) remove standard (old) pool crush rule:
          ceph osd crush rule rm .rgw.root
        6) rename new pool crush rule to standard name:
          ceph osd crush rule rename .rgw.root_v2 .rgw.root
      - Pool openstack-store.rgw.meta requires crush rule update, device class is not set
        1) create a new replicated rule for pool:
          ceph osd crush create-replicated openstack-store.rgw.meta_v2 default host hdd
        2) set pool crush rule to new one:
          ceph osd pool set crush_rule openstack-store.rgw.meta openstack-store.rgw.meta_v2
        3) check that replication is finished and status healthy: ceph -s
        4) in case of any problems revert step 2 and stop procedure:
          ceph osd pool set crush_rule openstack-store.rgw.meta_v2 openstack-store.rgw.meta
        5) remove standard (old) pool crush rule:
          ceph osd crush rule rm openstack-store.rgw.meta
        6) rename new pool crush rule to standard name:
          ceph osd crush rule rename openstack-store.rgw.meta_v2 openstack-store.rgw.meta
      - Pool openstack-store.rgw.buckets.index requires crush rule update, device class is not set
        1) create a new replicated rule for pool:
          ceph osd crush create-replicated openstack-store.rgw.buckets.index_v2 default host hdd
        2) set pool crush rule to new one:
          ceph osd pool set crush_rule openstack-store.rgw.buckets.index openstack-store.rgw.buckets.index_v2
        3) check that replication is finished and status healthy: ceph -s
        4) in case of any problems revert step 2 and stop procedure:
          ceph osd pool set crush_rule openstack-store.rgw.buckets.index_v2 openstack-store.rgw.buckets.index
        5) remove standard (old) pool crush rule:
          ceph osd crush rule rm openstack-store.rgw.buckets.index
        6) rename new pool crush rule to standard name:
          ceph osd crush rule rename openstack-store.rgw.buckets.index_v2 openstack-store.rgw.buckets.index
      - Pool openstack-store.rgw.buckets.data requires crush rule update, device class is not set
        1) create a new erasure-coded profile:
          ceph osd erasure-code-profile set openstack-store_ecprofile_hdd crush-device-class=hdd crush-failure-domain=host crush-root=default jerasure-per-chunk-alignment=false k=2 m=1 plugin=jerasure technique=reed_sol_van w=8
        2) create a new erasure-coded rule for pool:
          ceph osd crush create-erasure openstack-store.rgw.buckets.data_v2 openstack-store_ecprofile_hdd
        3) set pool crush rule to new one:
          ceph osd pool set crush_rule openstack-store.rgw.buckets.data openstack-store.rgw.buckets.data_v2
        4) check that replication is finished and status healthy: ceph -s
        5) in case of any problems revert step 2 and stop procedure:
          ceph osd pool set crush_rule openstack-store.rgw.buckets.data_v2 openstack-store.rgw.buckets.data
        6) remove standard (old) pool crush rule:
          ceph osd crush rule rm openstack-store.rgw.buckets.data
        7) rename new pool crush rule to standard name:
          ceph osd crush rule rename openstack-store.rgw.buckets.data_v2 openstack-store.rgw.buckets.data
        8) remove standard (old) erasure-coded profile:
          ceph osd erasure-code-profile rm openstack-store_ecprofile
      
    4. Verify that the Ceph cluster has rebalanced and has the HEALTH_OK status:

      ceph -s
      
    5. Exit the ceph-tools Pod.

  6. For CephFS, execute the rule-helper script to output step-by-step instructions and manually run each step provided in the output.

    Execution of the rule-helper script steps for CephFS
    1. Get a shell of the ceph-tools Pod:

      kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
      
    2. Run the /tmp/rule-helper.py script with the following parameters:

      python3 /tmp/rule-helper.py --prefix <cephfsName> --type cephfs --device-class <deviceClass>
      

      Substitute the following parameters:

      • <cephfsName> with CephFS name from spec.cephClusterSpec.sharedFilesystem.cephFS[0].name in the KaaSCephCluster object. In the example above, the name is cephfs-store.

      • <deviceClass> with the device class selected in the previous steps.

    3. Using the output of the command from the previous step, run manual commands step-by-step.

      Example output for the ssd device class:

      Found 3 rules require device class set
      - Pool cephfs-store-metadata requires crush rule update, device class is not set
        1) create a new replicated rule for pool:
              ceph osd crush create-replicated cephfs-store-metadata_v2 default host ssd
        2) set pool crush rule to new one:
              ceph osd pool set crush_rule cephfs-store-metadata cephfs-store-metadata_v2
        3) check that replication is finished and status healthy: ceph -s
        4) in case of any problems revert step 2 and stop procedure:
              ceph osd pool set crush_rule cephfs-store-metadata_v2 cephfs-store-metadata
        5) remove standard (old) pool crush rule:
              ceph osd crush rule rm cephfs-store-metadata
        6) rename new pool crush rule to standard name:
              ceph osd crush rule rename cephfs-store-metadata_v2 cephfs-store-metadata
      - Pool cephfs-store-default-pool requires crush rule update, device class is not set
        1) create a new replicated rule for pool:
              ceph osd crush create-replicated cephfs-store-default-pool_v2 default host ssd
        2) set pool crush rule to new one:
              ceph osd pool set crush_rule cephfs-store-default-pool cephfs-store-default-pool_v2
        3) check that replication is finished and status healthy: ceph -s
        4) in case of any problems revert step 2 and stop procedure:
              ceph osd pool set crush_rule cephfs-store-default-pool_v2 cephfs-store-default-pool
        5) remove standard (old) pool crush rule:
              ceph osd crush rule rm cephfs-store-default-pool
        6) rename new pool crush rule to standard name:
              ceph osd crush rule rename cephfs-store-default-pool_v2 cephfs-store-default-pool
      - Pool cephfs-store-second-pool requires crush rule update, device class is not set
        1) create a new erasure-coded profile:
              ceph osd erasure-code-profile set cephfs-store-second-pool_ecprofile_ssd crush-device-class=ssd crush-failure-domain=host crush-root=default jerasure-per-chunk-alignment=false k=2 m=1 plugin=jerasure technique=reed_sol_van w=8
        2) create a new erasure-coded rule for pool:
              ceph osd crush create-erasure cephfs-store-second-pool_v2 cephfs-store-second-pool_ecprofile_ssd
        3) set pool crush rule to new one:
              ceph osd pool set crush_rule cephfs-store-second-pool cephfs-store-second-pool_v2
        4) check that replication is finished and status healthy: ceph -s
        5) in case of any problems revert step 2 and stop procedure:
              ceph osd pool set crush_rule cephfs-store-second-pool_v2 cephfs-store-second-pool
        6) remove standard (old) pool crush rule:
              ceph osd crush rule rm cephfs-store-second-pool
        7) rename new pool crush rule to standard name:
              ceph osd crush rule rename cephfs-store-second-pool_v2 cephfs-store-second-pool
        8) remove standard (old) erasure-coded profile:
              ceph osd erasure-code-profile rm cephfs-store-second-pool_ecprofile
      
    4. Verify that the Ceph cluster has rebalanced and has the HEALTH_OK status:

      ceph -s
      
    5. Exit the ceph-tools Pod.

  7. Verify the pg_autoscaler module after switching deviceClass for all required pools:

    ceph osd pool autoscale-status
    

    The system response must contain all Ceph RGW and CephFS pools.

  8. On the management cluster, edit the KaaSCephCluster object of the corresponding managed cluster by adding the selected device class to the deviceClass parameter of the updated Ceph RGW and CephFS pools:

    kubectl -n <managedClusterProjectName> edit kaascephcluster
    
    Example configuration
    spec:
      cephClusterSpec:
        objectStorage:
          rgw:
            dataPool:
              failureDomain: host
              deviceClass: <rgwDeviceClass>
              erasureCoded:
                codingChunks: 1
                dataChunks: 2
            metadataPool:
              failureDomain: host
              deviceClass: <rgwDeviceClass>
              replicated:
                size: 3
            gateway:
              allNodes: false
              instances: 3
              port: 80
              securePort: 8443
            name: openstack-store
            preservePoolsOnDelete: false
        ...
        sharedFilesystem:
          cephFS:
          - name: cephfs-store
            dataPools:
            - name: default-pool
              deviceClass: <cephfsDeviceClass>
              replicated:
                size: 3
              failureDomain: host
            - name: second-pool
              deviceClass: <cephfsDeviceClass>
              erasureCoded:
                dataChunks: 2
                codingChunks: 1
            metadataPool:
              deviceClass: <cephfsDeviceClass>
              replicated:
                size: 3
              failureDomain: host
            ...
    

    Substitute <rgwDeviceClass> with the device class applied to Ceph RGW pools and <cephfsDeviceClass> with the device class applied to CephFS pools.

    You can use this configuration step for further management of Ceph RGW and/or CephFS. It does not impact the existing Ceph cluster configuration.

[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a bare metal based managed cluster with Ceph enabled fails with PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the descriptions of the Pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.
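
    For example, assuming the affected workload is the prometheus-server StatefulSet in the stacklight namespace mentioned above:

    kubectl -n stacklight scale statefulset prometheus-server --replicas=0

    In step 7, use the same command with the original number of replicas to scale the workload back up.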

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.


StackLight
[31485] Elasticsearch Curator does not delete indices as per retention period

Fixed in 14.0.0(1) and 15.0.1

Note

If you obtain patch releases, the issue is addressed in 2.23.2 for management and regional clusters and in 11.7.1 and 12.7.1 for managed clusters.

Elasticsearch Curator does not delete any indices according to the configured retention period on any type of Container Cloud clusters.

To verify whether your cluster is affected:

Identify versions of Cluster releases installed on your clusters:

kubectl get cluster --all-namespaces \
-o custom-columns=CLUSTER:.metadata.name,NAMESPACE:.metadata.namespace,VERSION:.spec.providerSpec.value.release

The following list contains all affected Cluster releases:

mke-11-7-0-3-5-7
mke-13-4-4
mke-13-5-3
mke-13-6-0
mke-13-7-0
mosk-12-7-0-23-1

As a workaround, on the affected clusters, create a temporary CronJob for elasticsearch-curator to clean the required indices:

kubectl get cronjob elasticsearch-curator -n stacklight -o json \
| sed 's/5.7.6-[0-9]*/5.7.6-20230404082402/g' \
| jq '.spec.schedule = "30 * * * *"' \
| jq '.metadata.name = "temporary-elasticsearch-curator"' \
| jq 'del(.metadata.resourceVersion,.metadata.uid,.metadata.selfLink,.metadata.creationTimestamp,.metadata.annotations,.metadata.generation,.metadata.ownerReferences,.metadata.labels,.spec.jobTemplate.metadata.labels,.spec.jobTemplate.spec.template.metadata.creationTimestamp,.spec.jobTemplate.spec.template.metadata.labels)' \
| jq '.metadata.labels.app = "temporary-elasticsearch-curator"' \
| jq '.spec.jobTemplate.metadata.labels.app = "temporary-elasticsearch-curator"' \
| jq '.spec.jobTemplate.spec.template.metadata.labels.app = "temporary-elasticsearch-curator"' \
| kubectl create -f -
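To verify that the temporary CronJob has been created and is scheduled, you can run the following illustrative check; the object name is taken from the command above:

kubectl -n stacklight get cronjob temporary-elasticsearch-curator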

Note

This CronJob is removed automatically during upgrade to the major Container Cloud release 2.24.0 or to the patch Container Cloud release 2.23.3 if you obtain patch releases.

Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.23.0. For major components and versions of the Cluster release introduced in 2.23.0, see Cluster release 11.7.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

Bare metal

ambassador Updated

1.23.3-alpine

baremetal-operator Updated

base-focal-20230126095055

baremetal-public-api Updated

1.36.3

baremetal-provider Updated

1.36.5

baremetal-resource-controller Updated

base-focal-20230130170757

ironic Updated

yoga-focal-20230130125656

kaas-ipam Updated

base-focal-20230127092754

keepalived

0.19.0-5-g6a7e17d

local-volume-provisioner Updated

2.5.0-1

mariadb

10.6.7-focal-20221028120155

metallb-controller

0.13.7

IAM

iam Updated

2.4.38

iam-controller Updated

1.36.3

keycloak

18.0.0

Container Cloud Updated

admission-controller

1.36.3

agent-controller

1.36.3

byo-credentials-controller

1.36.3

byo-provider

1.36.3

ceph-kcc-controller

1.36.3

cert-manager

1.36.3

client-certificate-controller

1.36.3

event-controller

1.36.3

golang

1.18.10

kaas-public-api

1.36.3

kaas-exporter

1.36.3

kaas-ui

1.36.3

license-controller

1.36.3

lcm-controller

1.36.3

machinepool-controller

1.36.3

metrics-server

0.5.2

mcc-cache

1.36.3

portforward-controller

1.36.3

proxy-controller

1.36.3

rbac-controller

1.36.3

release-controller

1.36.3

rhellicense-controller

1.36.3

scope-controller

1.36.3

user-controller

1.36.3

OpenStack Updated

openstack-provider

1.36.3

os-credentials-controller

1.36.3

VMware vSphere

metallb-controller

0.13.7

vsphere-provider Updated

1.36.3

vsphere-credentials-controller Updated

1.36.3

keepalived

0.19.0-5-g6a7e17d

squid-proxy Updated

0.0.1-8

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.23.0. For artifacts of the Cluster release introduced in 2.23.0, see Cluster release 11.7.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-api Updated

https://binary.mirantis.com/core/helm/baremetal-api-1.36.3.tgz

baremetal-operator Updated

https://binary.mirantis.com/core/helm/baremetal-operator-1.36.3.tgz

baremetal-public-api Updated

https://binary.mirantis.com/core/helm/baremetal-public-api-1.36.3.tgz

ironic-python-agent.initramfs Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20230126190304

ironic-python-agent.kernel Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20230126190304

kaas-ipam Updated

https://binary.mirantis.com/core/helm/kaas-ipam-1.36.3.tgz

local-volume-provisioner Updated

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.36.3.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.36.3.tgz

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-104-6e2e82c.tgz

Docker images

ambassador Updated

mirantis.azurecr.io/general/external/docker.io/library/nginx:1.23.3-alpine

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-focal-20230126095055

baremetal-resource-controller Updated

mirantis.azurecr.io/bm/baremetal-resource-controller:base-focal-20230130170757

dynamic_ipxe Updated

mirantis.azurecr.io/bm/dynamic-ipxe:base-focal-20230126202529

dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-alpine-20230118150429

dnsmasq-controller Updated

mirantis.azurecr.io/bm/dnsmasq-controller:base-focal-20230213185438

ironic Updated

mirantis.azurecr.io/openstack/ironic:yoga-focal-20230130125656

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:yoga-focal-20230130125656

ironic-prometheus-exporter Updated

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20221227163037

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-focal-20230127092754

mariadb

mirantis.azurecr.io/general/mariadb:10.6.7-focal-20221028120155

metallb-controller

mirantis.azurecr.io/bm/external/metallb/controller:v0.13.7-20221130155702

metallb-speaker

mirantis.azurecr.io/bm/external/metallb/speaker:v0.13.7-20221130155702

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.19.0-5-g6a7e17d

syslog-ng Updated

mirantis.azurecr.io/bm/syslog-ng:base-focal-20230126094812


Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.36.4.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.36.4.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.36.3.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.36.3.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.36.5.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.36.3.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.36.3.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.36.3.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.36.3.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.36.3.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.36.3.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.36.3.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.36.3.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.36.3.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.36.3.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.36.3.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.36.3.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.36.3.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.36.3.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.36.3.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.36.3.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.36.3.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.36.3.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.36.3.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.36.3.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.36.3.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.36.3.tgz

scope-controller

http://binary.mirantis.com/core/helm/scope-controller-1.36.3.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.36.3.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.36.3.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.36.3.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.36.3.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.36.3

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.36.3

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.36.3

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.36.3

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.36.3

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.6.1

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.36.3

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.36.3

frontend Updated

mirantis.azurecr.io/core/frontend:1.36.3

haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.17.0-8-g6ca89d5

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.36.3

kaas-exporter

mirantis.azurecr.io/core/kaas-exporter:1.36.3

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.36.3

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.36.3

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.36.3

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.36.3

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.19.0-5-g6a7e17d

metrics-server

mirantis.azurecr.io/core/external/metrics-server:v0.5.2

nginx

mirantis.azurecr.io/core/external/nginx:1.36.3

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.36.3

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.36.3

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.36.3

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.36.3

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.36.3

registry

mirantis.azurecr.io/lcm/registry:2.8.1

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.36.3

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.36.3

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.36.3

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-8

storage-discovery Deprecated

mirantis.azurecr.io/core/storage-discovery:1.36.3

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.36.3

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.36.3

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.36.3


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

Helm charts

iam Updated

http://binary.mirantis.com/iam/helm/iam-2.4.38.tgz

iam-proxy Updated

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.14.tgz

keycloak_proxy Removed

n/a

Docker images

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.0-20200311160233

mariadb

mirantis.azurecr.io/general/mariadb:10.6.7-focal-20220811085105

keycloak Updated

mirantis.azurecr.io/iam/keycloak:0.5.14

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-3

Security notes

The table below contains the number of vendor-specific addressed CVEs with Critical or High severity.

In total, in the current Container Cloud release, 212 CVEs have been fixed and 16 artifacts (images) updated.

Addressed CVEs

Fixed CVE ID

# of updated artifacts

RHSA-2022:6206

20

RHSA-2022:4991

11

RHSA-2023:0838

8

RHSA-2022:7089

8

RHSA-2022:1065

8

RHSA-2022:0332

8

RHSA-2021:5082

8

RHSA-2021:2717

8

RHSA-2022:8638

7

RHSA-2022:6878

7

RHSA-2022:1642

7

RHSA-2022:0951

7

RHSA-2022:0658

7

RHSA-2022:1537

6

RHSA-2021:4903

5

RHSA-2020:3014

5

RHSA-2019:4114

5

RHSA-2022:6778

4

RHSA-2020:0575

4

RHSA-2022:5095

3

RHSA-2021:2359

3

RHSA-2023:0284

2

RHSA-2022:5056

2

RHSA-2022:4799

2

RHSA-2021:1206

2

RHSA-2019:0997

2

RHSA-2022:7192

1

RHSA-2021:2170

1

RHSA-2021:1989

1

RHSA-2021:1024

1

RHSA-2021:0670

1

RHSA-2020:5476

1

RHSA-2020:3658

1

RHSA-2020:2755

1

RHSA-2020:2637

1

RHSA-2020:2338

1

RHSA-2020:0902

1

RHSA-2020:0273

1

RHSA-2020:0271

1

RHSA-2019:2692

1

RHSA-2019:1714

1

RHSA-2019:1619

1

RHSA-2019:1145

1

CVE-2021-33574

18

CVE-2022-2068

7

CVE-2022-1664

7

CVE-2022-1292

7

CVE-2022-29155

6

CVE-2019-25013

6

CVE-2022-0778

5

CVE-2022-23219

4

CVE-2022-23218

4

CVE-2019-20916

4

CVE-2022-24407

3

CVE-2022-32207

2

CVE-2022-27404

2

CVE-2022-40023

1

CVE-2022-1941

1

CVE-2021-32839

1

CVE-2021-3711

1

CVE-2021-3517

1

ALAS2-2023-1915

1

ALAS2-2023-1911

1

ALAS2-2023-1908

1

ALAS2-2022-1902

2

ALAS2-2022-1885

1

The full list of the CVEs present in the current Container Cloud release is available at the Mirantis Security Portal.

2.22.0

The Mirantis Container Cloud GA release 2.22.0:

  • Introduces support for the Cluster release 11.6.0 that is based on Mirantis Container Runtime 20.10.13 and Mirantis Kubernetes Engine 3.5.5 with Kubernetes 1.21.

  • Supports the Cluster release 12.5.0 that is based on the Cluster release 11.5.0 and represents Mirantis OpenStack for Kubernetes (MOSK) 22.5.

  • Does not support greenfield deployments on deprecated Cluster releases 11.5.0 and 8.10.0. Use the latest available Cluster releases of the series instead.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.22.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.22.0. For the list of enhancements in the Cluster release 11.6.0 that is introduced by the Container Cloud release 2.22.0, see the Cluster releases (managed).

The ‘rebootRequired’ notification in the baremetal-based machine status

Added the rebootRequired field to the status of a Machine object for the bare metal provider. This field indicates whether a manual host reboot is required to complete the Ubuntu operating system updates, if any.

You can view this notification either using the Container Cloud API or web UI:

  • API: reboot.required.true in status:providerStatus of a Machine object

  • Web UI: the One or more machines require a reboot notification on the Clusters and Machines pages
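For example, a minimal API check may look as follows. The exact JSONPath under status:providerStatus is an assumption derived from the field description above; verify it against your Machine object before relying on it:

kubectl get machine <machineName> -n <projectName> \
  -o jsonpath='{.status.providerStatus.reboot.required}'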

Note

For MOSK-based deployments, the feature support is available since MOSK 23.1.

Custom network configuration for managed clusters based on Equinix Metal with private networking

TechPreview

Implemented the ability to configure advanced network settings on managed clusters that are based on Equinix Metal with private networking. Using the custom parameter in the Cluster object, you can customize network configuration for the cluster machines. The feature relies on dedicated Subnet and L2Template objects that contain the necessary configuration for cluster machines.

Custom TLS certificates for the StackLight ‘iam-proxy’ endpoints

Implemented the ability to set up custom TLS certificates for the following StackLight iam-proxy endpoints on any type of Container Cloud clusters:

  • iam-proxy-alerta

  • iam-proxy-alertmanager

  • iam-proxy-grafana

  • iam-proxy-kibana

  • iam-proxy-prometheus

Cluster deployment and update history objects

Implemented the following Container Cloud objects describing the history of a cluster and machine deployment and update:

  • ClusterDeploymentStatus

  • ClusterUpgradeStatus

  • MachineDeploymentStatus

  • MachineUpgradeStatus

Using these objects, you can inspect cluster and machine deployment and update stages, their time stamps, statuses, and failure messages, if any. In the Container Cloud web UI, use the History option located under the More action icon of a cluster and machine.

For existing clusters, these objects become available after the management cluster upgrade to Container Cloud 2.22.0.
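For example, you can list these objects in a cluster project using kubectl. The lowercase resource names below are assumptions derived from the object kinds listed above; adjust them if your cluster exposes different resource or short names:

kubectl get clusterdeploymentstatus,clusterupgradestatus -n <projectName>
kubectl get machinedeploymentstatus,machineupgradestatus -n <projectName>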

Extended logging format for essential management cluster components

Extended the logging format for the admission-controller, storage-discovery, and all supported <providerName>-provider services of a management cluster. Now, log records for these services contain the following entries:

level:<debug,info,warn,error,panic>,
ts:<YYYY-MM-DDTHH:mm:ssZ>,
logger:<providerType>.<objectName>.req:<requestID>,
caller:<lineOfCode>,
msg:<message>,
error:<errorMessage>,
stacktrace:<codeInfo>
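
The following hypothetical record illustrates the format above; all values are examples only and do not reproduce actual service output:

level:info, ts:2022-11-22T10:15:42Z, logger:openstack.kaas-mgmt.req:42, caller:controller.go:215, msg:reconcile finished, error:, stacktrace:
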
Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.22.0 along with the Cluster release 11.6.0:

  • [27192] Fixed the issue that prevented portforward-controller from accepting new connections correctly.

  • [26659] Fixed the issue that caused the deployment of a regional cluster based on bare metal or Equinix Metal with private networking to fail with mcc-cache Pods being stuck in the CrashLoopBackOff state.

  • [28783] Fixed the issue with the Ceph condition getting stuck in the absence of the Ceph cluster secrets information on MOSK 22.3 clusters.

    Caution

    Starting from MOSK 22.4, the Ceph cluster version updates to 15.2.17. Therefore, if you applied the workaround for MOSK 22.3 described in Ceph known issue 28783, remove the version parameter definition from KaaSCephCluster after the managed cluster update to MOSK 22.4.

  • [26820] Fixed the issue with the status section in the KaaSCephCluster.status CR not reflecting issues during a Ceph cluster deletion.

  • [25624] Fixed the inability to specify Ceph pool API parameters by adding the parameters option, which specifies a key-value map for the Ceph pool parameters.

    Caution

    For MKE clusters that are part of MOSK infrastructure, the feature support will become available in one of the following Container Cloud releases.

  • [28526] Fixed the issue with the low CPU limit of 100m for kaas-exporter blocking metric collection.

  • [28134] Fixed the issue with a cluster update failing with nodes being stuck in the Prepare state due to an error when evicting Pods for Patroni.

  • [27732-1] Fixed the issue with the OpenSearch elasticsearch.persistentVolumeClaimSize custom setting being overwritten by logging.persistentVolumeClaimSize during deployment of a Container Cloud cluster of any type and being set to the default 30Gi.

    Depending on available resources on existing clusters that were affected by the issue, additional actions may be required after an update to Container Cloud 2.22.0. For details, see OpenSearchPVCMismatch alert raises due to the OpenSearch PVC size mismatch. New clusters deployed on top of Container Cloud 2.22.0 are not affected.

  • [27732-2] Fixed the issue with custom settings for the deprecated elasticsearch.logstashRetentionTime parameter being overwritten by the default setting set to 1 day.

  • [20876] Fixed the issue with StackLight Pods getting stuck with the Pod predicate NodeAffinity failed error due to the StackLight node label being added to one machine and then removed from another one.

  • [28651] Updated Telemeter for StackLight to fix the discovered vulnerabilities.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.22.0 including the Cluster release 11.6.0.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


Bare metal
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.
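For example:

kubectl cordon <nodeName>
kubectl drain <nodeName>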

[20736] Region deletion failure after regional deployment failure

If a baremetal-based regional cluster deployment fails before pivoting is done, the corresponding region deletion fails.

Workaround:

Using the command below, manually delete all possible traces of the failed regional cluster deployment, including but not limited to the following objects that contain the kaas.mirantis.com/region label of the affected region:

  • cluster

  • machine

  • baremetalhost

  • baremetalhostprofile

  • l2template

  • subnet

  • ipamhost

  • ipaddr

kubectl delete <objectName> -l kaas.mirantis.com/region=<regionName>
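
For example, the following minimal sketch iterates over the object types listed above; substitute <regionName> and review what is targeted for deletion before running it against a production cluster:

for object in cluster machine baremetalhost baremetalhostprofile l2template subnet ipamhost ipaddr; do
  kubectl delete $object -l kaas.mirantis.com/region=<regionName>
done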

Warning

Do not use the same region name again after the regional cluster deployment failure since some objects that reference the region name may still exist.



Equinix Metal with private networking
[29296] Deployment of a managed cluster fails during provisioning

Deployment of a managed cluster based on Equinix Metal with private networking fails during provisioning with the following error:

InspectionError: Failed to obtain hardware details.
Ensure DHCP relay is up and running

Workaround:

  1. In deployment/dnsmasq, update the image tag version for the dhcpd container to base-alpine-20230118150429:

    kubectl -n kaas edit deployment/dnsmasq
    
  2. In dnsmasq.conf, override the default undionly.kpxe with the ipxe.pxe one:

    kubectl -n kaas edit cm dnsmasq-config
    

    Example of existing configuration:

    dhcp-boot=/undionly.kpxe,httpd-http.ipxe.boot.local,dhcp-lb.ipxe.boot.local
    

    Example of new configuration:

    dhcp-boot=/ipxe.pxe,httpd-http.ipxe.boot.local,dhcp-lb.ipxe.boot.local
    

vSphere
[29647] The ‘Network prepared’ stage of cluster deployment never succeeds

Fixed in 11.7.0

During deployment of a vSphere-based management or regional cluster with IPAM disabled, the Network prepared stage gets stuck in the NotStarted status. The issue does not affect cluster deployment. Therefore, disregard the error message.


LCM
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>
[27797] A cluster ‘kubeconfig’ stops working during MKE minor version update

During update of a Container Cloud cluster of any type, if the MKE minor version is updated from 3.4.x to 3.5.x, access to the cluster using the existing kubeconfig fails with the You must be logged in to the server (Unauthorized) error due to OIDC settings being reconfigured.

As a workaround, during the cluster update process, use the admin kubeconfig instead of the existing one. Once the update completes, you can use the existing cluster kubeconfig again.

To obtain the admin kubeconfig:

kubectl --kubeconfig <pathToMgmtKubeconfig> get secret -n <affectedClusterNamespace> \
-o yaml <affectedClusterName>-kubeconfig | awk '/admin.conf/ {print $2}' | \
head -1 | base64 -d > clusterKubeconfig.yaml

If the related cluster is regional, replace <pathToMgmtKubeconfig> with <pathToRegionalKubeconfig>.


TLS configuration
[29604] The ‘failed to get kubeconfig’ error during TLS configuration

Fixed in 14.0.0(1) and 15.0.1

When setting a new Transport Layer Security (TLS) certificate for a cluster, the false positive failed to get kubeconfig error may occur on the Waiting for TLS settings to be applied stage. No actions are required. Therefore, disregard the error.

To verify the status of the TLS configuration being applied:

kubectl get cluster <ClusterName> -n <ClusterProjectName> -o jsonpath-as-json="{.status.providerStatus.tls.<Application>}"

Possible values for the <Application> parameter are as follows:

  • keycloak

  • ui

  • cache

  • mke

  • iamProxyAlerta

  • iamProxyAlertManager

  • iamProxyGrafana

  • iamProxyKibana

  • iamProxyPrometheus

Example of system response:

[
    {
        "expirationTime": "2024-01-06T09:37:04Z",
        "hostname": "domain.com",
    }
]

In this example, expirationTime equals the NotAfter field of the server certificate, and the value of hostname contains the configured application name.


StackLight
[30040] OpenSearch is not in the ‘deployed’ status during cluster update

Fixed in 11.7.0 and 12.7.0

Note

The issue may affect the Container Cloud or Cluster release update to the following versions:

  • 2.22.0 for management and regional clusters

  • 11.6.0 for management, regional, and managed clusters

  • 13.2.5, 13.3.5, 13.4.3, and 13.5.2 for attached MKE clusters

The issue does not affect clusters originally deployed since the following Cluster releases: 11.0.0, 8.6.0, 7.6.0.

During cluster update to versions mentioned in the note above, the following OpenSearch-related error may occur on clusters that were originally deployed or attached using Container Cloud 2.15.0 or earlier, before the transition from Elasticsearch to OpenSearch:

The stacklight/opensearch release of the stacklight/stacklight-bundle HelmBundle
reconciled by the stacklight/stacklight-helm-controller Controller
is not in the "deployed" status for the last 15 minutes.

The issue affects clusters with elasticsearch.persistentVolumeClaimSize configured for values other than 30Gi.

To verify that the cluster is affected:

  1. Verify whether the HelmBundleReleaseNotDeployed alert for the opensearch release is firing. If so, the cluster is most probably affected. Otherwise, the cluster is not affected.

  2. Verify the reason of the HelmBundleReleaseNotDeployed alert for the opensearch release:

    kubectl get helmbundle stacklight-bundle -n stacklight -o json | jq '.status.releaseStatuses[] | select(.chart == "opensearch") | .message'
    

    Example system response from the affected cluster:

    Upgrade "opensearch" failed: cannot patch "opensearch-master" with kind StatefulSet: \
    StatefulSet.apps "opensearch-master" is invalid: spec: Forbidden: \
    updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden
    

Workaround:

  1. Scale down the opensearch-dashboards and metricbeat resources to 0:

    kubectl -n stacklight scale --replicas 0 deployment opensearch-dashboards && \
    kubectl -n stacklight get pods -l app=opensearch-dashboards | awk '{if (NR!=1) {print $1}}' | xargs -r \
    kubectl -n stacklight wait --for=delete --timeout=10m pod
    
    kubectl -n stacklight scale --replicas 0 deployment metricbeat && \
    kubectl -n stacklight get pods -l app=metricbeat | awk '{if (NR!=1) {print $1}}' | xargs -r \
    kubectl -n stacklight wait --for=delete --timeout=10m pod
    

    Wait for the commands in this and the next step to complete. The completion time depends on the cluster size.

  2. Disable the elasticsearch-curator CronJob:

    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec": {"suspend": true}}'
    
  3. Scale down the opensearch-master StatefulSet:

    kubectl -n stacklight scale --replicas 0 statefulset opensearch-master && \
    kubectl -n stacklight get pods -l app=opensearch-master | awk '{if (NR!=1) {print $1}}' | xargs -r \
    kubectl -n stacklight wait --for=delete --timeout=30m pod
    
  4. Delete the OpenSearch Helm release:

    helm uninstall --no-hooks opensearch -n stacklight
    
  5. Wait up to 5 minutes for Helm Controller to retry the upgrade and properly create the opensearch-master StatefulSet.

    To verify readiness of the opensearch-master Pods:

    kubectl -n stacklight wait --for=condition=Ready --timeout=30m pod -l app=opensearch-master
    

    Example of a successful system response in an HA setup:

    pod/opensearch-master-0 condition met
    pod/opensearch-master-1 condition met
    pod/opensearch-master-2 condition met
    

    Example of a successful system response in a non-HA setup:

    pod/opensearch-master-0 condition met
    
  6. Scale up the opensearch-dashboards and metricbeat resources:

    kubectl -n stacklight scale --replicas 1 deployment opensearch-dashboards && \
    kubectl -n stacklight wait --for=condition=Ready --timeout=10m pod -l app=opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 1 deployment metricbeat && \
    kubectl -n stacklight wait --for=condition=Ready --timeout=10m pod -l app=metricbeat
    
  7. Enable the elasticsearch-curator CronJob:

    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec": {"suspend": false}}'
    
[29329] Recreation of the Patroni container replica is stuck

Fixed in 11.7.0 and 12.7.0

During an update of a Container Cloud cluster of any type, recreation of the Patroni container replica gets stuck in the degraded state due to the liveness probe killing the container that runs the pg_rewind procedure. The issue affects clusters on which the pg_rewind procedure takes more time than the full cycle of the liveness probe.

The sample logs of the affected cluster:

INFO: doing crash recovery in a single user mode
ERROR: Crash recovery finished with code=-6
INFO:  stdout=
INFO:  stderr=2023-01-11 10:20:34 GMT [64]: [1-1] 63be8d72.40 0     LOG:  database system was interrupted; last known up at 2023-01-10 17:00:59 GMT
[64]: [2-1] 63be8d72.40 0  LOG:  could not read from log segment 00000002000000000000000F, offset 0: read 0 of 8192
[64]: [3-1] 63be8d72.40 0  LOG:  invalid primary checkpoint record
[64]: [4-1] 63be8d72.40 0  PANIC:  could not locate a valid checkpoint record

Workaround:

For the affected replica and PVC, run:

kubectl delete persistentvolumeclaim/storage-volume-patroni-<replica-id> -n stacklight

kubectl delete pod/patroni-<replica-id> -n stacklight
[28822] Reference Application triggers alerts during its upgrade

Fixed in 11.7.0 and 12.7.0

On managed clusters with enabled Reference Application, the following alerts are triggered during a managed cluster update from the Cluster release 11.5.0 to 11.6.0 or 7.11.0 to 11.5.0:

  • KubeDeploymentOutage for the refapp Deployment

  • RefAppDown

  • RefAppProbeTooLong

  • RefAppTargetDown

This behavior is expected, no actions are required. Therefore, disregard these alerts.

[28479] Increase of the ‘metric-collector’ Pod restarts due to OOM

Fixed in 11.7.0 and 12.7.0

On baremetal-based management clusters, the restart count of the metric-collector Pod increases over time with reason: OOMKilled in the containerStatuses of the Pod. Only clusters with HTTP proxy enabled are affected.

Such behavior is expected. Therefore, disregard these restarts.

[28373] Alerta can get stuck after a failed initialization

Fixed in 11.7.0 and 12.7.0

During creation of a Container Cloud cluster of any type with StackLight enabled, Alerta can get stuck after a failed initialization with only 1 Pod in the READY state. For example:

kubectl get po -n stacklight -l app=alerta

NAME                          READY   STATUS    RESTARTS   AGE
pod/alerta-5f96b775db-45qsz   1/1     Running   0          20h
pod/alerta-5f96b775db-xj4rl   0/1     Running   0          20h

Workaround:

  1. Recreate the affected Alerta Pod:

    kubectl --kubeconfig <affectedClusterKubeconfig> -n stacklight delete pod <stuckAlertaPodName>
    
  2. Verify that both Alerta Pods are in the READY state:

    kubectl get po -n stacklight -l app=alerta
    
[20876] StackLight pods get stuck with the ‘NodeAffinity failed’ error

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshoot StackLight.

On a managed cluster, the StackLight pods may get stuck with the Pod predicate NodeAffinity failed error in the pod status. The issue may occur if the StackLight node label was added to one machine and then removed from another one.

The issue does not affect the StackLight services: all required StackLight pods migrate successfully, except for extra pods that are created and get stuck during pod migration.

As a workaround, remove the stuck pods:

kubectl --kubeconfig <managedClusterKubeconfig> -n stacklight delete pod <stuckPodName>
Ceph
[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a managed cluster based on bare metal with Ceph enabled fails: the PersistentVolumeClaim gets stuck in the Pending state for the prometheus-server StatefulSet, and the MountVolume.MountDevice failed for volume warning appears in the StackLight event logs.

Workaround:

  1. Verify that the description of the Pods that failed to run contains the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is the name of a Pod that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.

Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.22.0. For major components and versions of the Cluster release introduced in 2.22.0, see Cluster release 11.6.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.35.11

aws-credentials-controller

1.35.11

Azure Updated

azure-provider

1.35.11

azure-credentials-controller

1.35.11

Bare metal

ambassador

1.20.1-alpine

baremetal-operator Updated

base-focal-20221130142939

baremetal-public-api Updated

1.35.11

baremetal-provider Updated

1.35.11

baremetal-resource-controller

base-focal-20221219124546

ironic Updated

yoga-focal-20221118093824

kaas-ipam

base-focal-20221202191902

keepalived

0.19.0-5-g6a7e17d

local-volume-provisioner Updated

2.5.0-1

mariadb Updated

10.6.7-focal-20221028120155

metallb-controller Updated

0.13.7

IAM

iam Updated

2.4.36

iam-controller Updated

1.35.11

keycloak

18.0.0

Container Cloud Updated

admission-controller

1.35.12

agent-controller

1.35.11

byo-credentials-controller

1.35.11

byo-provider

1.35.11

ceph-kcc-controller

1.35.11

cert-manager

1.35.11

client-certificate-controller

1.35.11

event-controller

1.35.11

golang Updated

1.18.8

kaas-public-api

1.35.11

kaas-exporter

1.35.11

kaas-ui

1.35.11

license-controller

1.35.11

lcm-controller

0.3.0-352-gf55d6378

machinepool-controller

1.35.11

mcc-cache

1.35.11

metrics-server

0.5.2

portforward-controller

1.35.11

proxy-controller

1.35.11

rbac-controller

1.35.11

release-controller

1.35.11

rhellicense-controller

1.35.11

scope-controller

1.35.11

user-controller

1.35.11

Equinix Metal

equinix-provider Updated

1.35.11

equinix-credentials-controller Updated

1.35.11

keepalived

0.19.0-5-g6a7e17d

OpenStack Updated

openstack-provider

1.35.11

os-credentials-controller

1.35.11

VMware vSphere

metallb-controller

0.13.7

vsphere-provider Updated

1.35.11

vsphere-credentials-controller Updated

1.35.11

keepalived

0.19.0-5-g6a7e17d

squid-proxy Updated

0.0.1-8

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.22.0. For artifacts of the Cluster release introduced in 2.22.0, see Cluster release 11.6.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-api Updated

https://binary.mirantis.com/core/helm/baremetal-api-1.35.11.tgz

baremetal-operator Updated

https://binary.mirantis.com/core/helm/baremetal-operator-1.35.11.tgz

baremetal-public-api Updated

https://binary.mirantis.com/core/helm/baremetal-public-api-1.35.11.tgz

ironic-python-agent.initramfs Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20221228205257

ironic-python-agent.kernel Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20221228205257

kaas-ipam Updated

https://binary.mirantis.com/core/helm/kaas-ipam-1.35.11.tgz

local-volume-provisioner Updated

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.35.11.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.35.11.tgz

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-104-6e2e82c.tgz

Docker images

ambassador

mirantis.azurecr.io/general/external/docker.io/library/nginx:1.20.1-alpine

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-focal-20221130142939

baremetal-resource-controller Updated

mirantis.azurecr.io/bm/baremetal-resource-controller:base-focal-20221219124546

dynamic_ipxe Updated

mirantis.azurecr.io/bm/dynamic-ipxe:base-focal-20221219135753

dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-alpine-20221121215534

dnsmasq-controller Updated

mirantis.azurecr.io/bm/dnsmasq-controller:base-focal-20221219112845

ironic Updated

mirantis.azurecr.io/openstack/ironic:yoga-focal-20221118093824

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:yoga-focal-20221118093824

ironic-prometheus-exporter Updated

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20221117115942

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-focal-20221202191902

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.6.7-focal-20221028120155

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.19.0-5-g6a7e17d

metallb-controller Updated

mirantis.azurecr.io/bm/external/metallb/controller:v0.13.7-20221130155702

metallb-speaker Updated

mirantis.azurecr.io/bm/external/metallb/speaker:v0.13.7-20221130155702

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-focal-20220128103433


Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.35.11.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.35.11.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.35.11.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.35.11.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.35.11.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.35.11.tgz

azure-credentials-controller

https://binary.mirantis.com/core/helm/azure-credentials-controller-1.35.11.tgz

azure-provider

https://binary.mirantis.com/core/helm/azure-provider-1.35.11.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.35.11.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.35.11.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.35.11.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.35.11.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.35.11.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.35.11.tgz

configuration-collector

https://binary.mirantis.com/core/helm/configuration-collector-1.35.11.tgz

equinix-credentials-controller

https://binary.mirantis.com/core/helm/equinix-credentials-controller-1.35.11.tgz

equinix-provider

https://binary.mirantis.com/core/helm/equinix-provider-1.35.11.tgz

equinixmetalv2-provider

https://binary.mirantis.com/core/helm/equinixmetalv2-provider-1.35.11.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.35.11.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.35.11.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.35.11.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.35.11.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.35.11.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.35.11.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.35.11.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.35.11.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.35.11.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.35.11.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.35.11.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.35.11.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.35.11.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.35.11.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.35.11.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.35.11.tgz

scope-controller

http://binary.mirantis.com/core/helm/scope-controller-1.35.11.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.35.11.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.35.11.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.35.11.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.35.11.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.35.11

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.35.11

aws-cluster-api-controller Updated

mirantis.azurecr.io/core/aws-cluster-api-controller:1.35.11

aws-credentials-controller Updated

mirantis.azurecr.io/core/aws-credentials-controller:1.35.11

azure-cloud-controller-manager New

mirantis.azurecr.io/lcm/external/azure-cloud-controller-manager:v1.23.11

azure-cloud-node-manager New

mirantis.azurecr.io/lcm/external/azure-cloud-node-manager:v1.23.11

azure-cluster-api-controller Updated

mirantis.azurecr.io/core/azure-cluster-api-controller:1.35.11

azure-credentials-controller Updated

mirantis.azurecr.io/core/azure-credentials-controller:1.35.11

azuredisk-csi New

mirantis.azurecr.io/lcm/azuredisk-csi-driver:v0.20.0-25-gfaef237

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.35.11

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.35.11

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.35.11

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.6.1

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.35.11

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.35.11

cluster-api-provider-equinix Updated

mirantis.azurecr.io/core/equinix-cluster-api-controller:1.35.11

equinix-credentials-controller Updated

mirantis.azurecr.io/core/equinix-credentials-controller:1.35.11

frontend Updated

mirantis.azurecr.io/core/frontend:1.35.11

haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.17.0-8-g6ca89d5

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.35.11

kaas-exporter

mirantis.azurecr.io/core/kaas-exporter:1.35.11

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.35.11

lcm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-352-gf55d6378

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.35.11

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.35.11

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.19.0-5-g6a7e17d

metrics-server

mirantis.azurecr.io/core/external/metrics-server:v0.5.2

nginx

mirantis.azurecr.io/core/external/nginx:1.35.11

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.35.11

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.35.11

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.35.11

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.35.11

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.35.11

registry Updated

mirantis.azurecr.io/lcm/registry:2.8.1

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.35.11

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.35.11

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.35.11

squid-proxy Updated

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-8

storage-discovery

mirantis.azurecr.io/core/storage-discovery:1.35.11

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.35.11

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.35.11

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.35.11


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

Helm charts

iam Updated

http://binary.mirantis.com/iam/helm/iam-2.4.36.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.13.tgz

keycloak_proxy Updated

http://binary.mirantis.com/core/helm/keycloak_proxy-1.35.11.tgz

Docker images

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.0-20200311160233

mariadb

mirantis.azurecr.io/general/mariadb:10.6.7-focal-20220811085105

keycloak Updated

mirantis.azurecr.io/iam/keycloak:0.5.13

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-3

Security notes

The table below contains the number of vendor-specific addressed CVEs with Critical or High severity.

In total, in the current Container Cloud release, 6 CVEs have been fixed and 4 artifacts (images) updated.

Addressed CVEs

Fixed CVE ID

# of updated artifacts

CVE-2022-40023

2

CVE-2022-25236

1

CVE-2022-25235

1

RHSA-2022:8638

1

RHSA-2022:7089

1

RHSA-2022:6878

1

The full list of the CVEs present in the current Container Cloud release is available at the Mirantis Security Portal.

Releases delivered in 2022

This section contains historical information on the unsupported Container Cloud releases delivered in 2022. For the latest supported Container Cloud release, see Container Cloud releases.

Unsupported Container Cloud releases 2022

Version

Release date

Summary

2.21.1

Dec 19, 2022

Based on 2.21.0, Container Cloud 2.21.1:

  • Introduces the Cluster release 12.5.0 that is based on 11.5.0 and supports Mirantis OpenStack for Kubernetes (MOSK) 22.5.

  • Supports the Cluster releases 11.5.0 and 7.11.0. The deprecated Cluster releases 11.4.0, 8.10.0, and 7.10.0 are not supported for new deployments.

  • Contains features and amendments of the parent release 2.21.0.

2.21.0

Nov 22, 2022

  • MKE patch releases update from 3.4.10 to 3.4.11 and from 3.5.4 to 3.5.5

  • MCR patch release update from 20.10.12 to 20.10.13

  • MetalLB minor version update from 0.12.1 to 0.13.4

  • BareMetalHostCredential CR

  • Dnsmasq configuration enhancements

  • Combining router and seed node settings on a single Equinix Metal server

  • Graceful machine deletion

  • Container Cloud web UI support for custom Docker registries

  • Enhanced etcd monitoring

  • Reference Application for workload monitoring

  • Ceph secrets specification in the Ceph cluster status

  • Amazon S3 bucket policies for Ceph Object Storage users

  • Documentation: Firewall configuration

2.20.1

Sep 29, 2022

Based on 2.20.0, Container Cloud 2.20.1:

  • Introduces the Cluster release 8.10.0 that is based on 7.10.0 and supports Mirantis OpenStack for Kubernetes (MOSK) 22.4.

  • Supports the Cluster releases 7.10.0 and 11.4.0. The deprecated Cluster releases 11.3.0, 8.8.0, and 7.9.0 are not supported for new deployments.

  • Contains features and amendments of the parent release 2.20.0.

2.20.0

Sep 5, 2022

  • MKE and MCR versions update

  • Configuration of TLS certificates for mcc-cache and MKE

  • General availability support for MITM proxy

  • Bastion node configuration for OpenStack and AWS managed clusters

  • New member role for IAM

  • Bare metal:

    • Mandatory IPAM service label for bare metal LCM subnets

    • Flexible size units for bare metal host profiles

  • Ceph:

    • Ceph removal from management and regional clusters

    • Creation of Ceph RADOS Gateway users

    • Custom RBD map options

    • Ceph Manager modules configuration

    • Ceph daemons health check configuration

2.19.0

July 27, 2022

  • Modification of network configuration on existing machines

  • New format of log entries on management clusters

  • Extended and basic versions of logs

  • Removal of Helm v2 support in Helm Controller

  • StackLight:

    • Kubernetes Containers Grafana dashboard

    • Improvements to alerting

  • Ceph:

    • Ceph OSD removal or replacement by ID

    • Multiple Ceph data pools per CephFS

  • Container Cloud web UI:

    • Upgrade order for machines

    • Booting an OpenStack machine from a volume

    • Distribution selector for bare metal machines

    • Elasticsearch switch to OpenSearch

    • Ceph cluster summary

2.18.1

June 30, 2022

Based on 2.18.0, Container Cloud 2.18.1:

  • Introduces the Cluster release 8.8.0 that is based on 7.8.0 and supports Mirantis OpenStack for Kubernetes (MOSK) 22.3.

  • Supports the Cluster releases 7.8.0 and 11.2.0. The deprecated Cluster releases 11.1.0, 8.6.0, and 7.7.0 are not supported for new deployments.

  • Contains features and amendments of the parent release 2.18.0.

2.18.0

June 13, 2022

  • MKE and MCR version update

  • Ubuntu kernel update for bare metal clusters

  • Support for Ubuntu 20.04 on greenfield vSphere deployments

  • Booting a machine from a block storage volume for OpenStack provider

  • IPSec encryption for Kubernetes networking

  • Support for MITM proxy

  • Support for custom Docker registries

  • Upgrade sequence for machines

  • Deprecation of public network mode on the Equinix Metal based deployments

  • Enablement of Salesforce propagation to all clusters using web UI

  • StackLight:

    • Elasticsearch switch to OpenSearch

    • Improvements to StackLight alerting

    • Prometheus remote write

    • StackLight mandatory parameters

  • Ceph daemons placement

  • Documentation enhancements

2.17.0

May 11, 2022

  • General availability for Ubuntu 20.04 on greenfield deployments

  • EBS instead of NVMe as persistent storage for AWS-based nodes

  • Container Cloud on top of MOSK Victoria with Tungsten Fabric

  • MKE 3.5.1 for management and regional clusters

  • Manager nodes deletion on all cluster types

  • Automatic propagation of Salesforce configuration to all clusters

  • Custom values for node labels

  • Machine pools

  • StackLight:

    • Elasticsearch retention time per index

    • Helm controller monitoring

  • Ceph:

    • Configurable timeouts for Ceph requests

    • Configurable replicas count for Ceph controllers

    • KaaSCephCluster controller

2.16.1

Apr 14, 2022

Based on 2.16.0, Container Cloud 2.16.1:

  • Introduces the Cluster release 8.6.0 that is based on 7.6.0 and supports Mirantis OpenStack for Kubernetes (MOSK) 22.2.

  • Supports the Cluster releases 7.6.0 and 11.0.0. The deprecated Cluster releases 8.5.0, 7.5.0, and 5.22.0 are not supported for new deployments.

  • Contains features and amendments of the parent release 2.16.0.

2.16.0

Mar 31, 2022

  • Support for MKE 3.5.1 and MKE version update from 3.4.6 to 3.4.7

  • Automatic renewal of internal TLS certificates

  • Keepalived for built-in load balancing in standalone containers

  • Reworked ‘Reconfigure’ phase of LCMMachine

  • Bare metal provider:

    • Ubuntu 20.04 for greenfield bare metal managed cluster

    • Additional regional cluster on bare metal

    • MOSK on local RAID devices

    • Any interface name for bare metal LCM network

  • StackLight:

    • Improvements to StackLight alerting

    • Elasticsearch retention time per index

    • Prometheus Blackbox Exporter configuration

    • Custom Prometheus scrape configurations

    • Elasticsearch switch to OpenSearch

  • Container Cloud web UI:

    • License management

    • Scheduling of a management cluster upgrade

2.15.1

Feb 23, 2022

Based on 2.15.0, this release introduces the Cluster release 8.5.0 that is based on 5.22.0 and supports Mirantis OpenStack for Kubernetes (MOSK) 22.1.

For the list of Cluster releases 7.x and 5.x that are supported by 2.15.1 as well as for its features with addressed and known issues, refer to the parent release 2.15.0.

2.15.0

Jan 31, 2022

  • MCR version update from 20.10.6 to 20.10.8

  • Scheduled Container Cloud auto-upgrade

  • Cluster and machine maintenance mode

  • Improvements for monitoring of machine deployment live status

  • Deprecation of iam-api and IAM CLI

  • HAProxy instead of NGINX for vSphere, Equinix Metal, and bare metal providers

  • Additional regional cluster on Equinix Metal with private networking as Technology Preview

  • Bare metal:

    • Automatic upgrade of bare metal host operating system during cluster update

    • Dedicated subnet for externally accessible Kubernetes API endpoint

  • Ceph:

    • Automated Ceph LCM

    • Ceph CSI provisioner tolerations and node affinity

    • KaaSCephCluster.status enhancement

    • Shared File System (CephFS)

    • Switch of Ceph Helm releases from v2 to v3

  • StackLight:

    • Node Exporter collectors

    • Improvements to StackLight alerting

    • Metric Collector alerts

  • Documentation:

    • Expanding the capacity of the existing Subnet resources on a running cluster

    • Calculating target ratio for Ceph pools

2.21.1

The Mirantis Container Cloud GA release 2.21.1 is based on 2.21.0 and:

  • Introduces support for the Cluster release 12.5.0 that is based on the Cluster release 11.5.0 and represents Mirantis OpenStack for Kubernetes (MOSK) 22.5.

  • Introduces support for Mirantis Kubernetes Engine 3.5.5 with Kubernetes 1.21 and Mirantis Container Runtime 20.10.13 in the 12.x Cluster release series.

  • Supports the latest Cluster releases 7.11.0 and 11.5.0.

  • Does not support greenfield deployments based on deprecated Cluster releases 11.4.0, 8.10.0, and 7.10.0. Use the latest available Cluster releases of the series instead.

For details about the Container Cloud release 2.21.1, refer to its parent release 2.21.0.

Caution

Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

2.21.0

The Mirantis Container Cloud GA release 2.21.0:

  • Introduces support for the Cluster release 11.5.0 that is based on Mirantis Container Runtime 20.10.13 and Mirantis Kubernetes Engine 3.5.5 with Kubernetes 1.21.

  • Introduces support for the Cluster release 7.11.0 that is based on Mirantis Container Runtime 20.10.13 and Mirantis Kubernetes Engine 3.4.11 with Kubernetes 1.20.

  • Supports the Cluster release 8.10.0 that is based on the Cluster release 7.10.0 and represents Mirantis OpenStack for Kubernetes (MOSK) 22.4.

  • Does not support greenfield deployments on deprecated Cluster releases 11.4.0, 8.8.0, and 7.10.0. Use the latest available Cluster releases of the series instead.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.21.0.

Caution

Container Cloud 2.21.0 requires manual post-upgrade steps. For details, see Post-upgrade actions.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.21.0. For the list of enhancements in the Cluster releases 11.5.0 and 7.11.0 that are introduced by the Container Cloud release 2.21.0, see the Cluster releases (managed).


‘BareMetalHostCredential’ custom resource for bare metal hosts

Implemented the BareMetalHostCredential custom resource to simplify permissions and roles management on bare metal management, regional, and managed clusters.

Note

For MOSK-based deployments, the feature support is available since MOSK 22.5.

The BareMetalHostCredential object creation triggers the following automatic actions:

  1. Creates an underlying Secret object containing the user name and password of the BMC account taken from the related BareMetalHostCredential object.

  2. Erases the sensitive password data of the BMC account from the BareMetalHostCredential object.

  3. Adds the name of the created Secret object to the spec.password.name section of the related BareMetalHostCredential object.

  4. Updates BareMetalHost.spec.bmc.credentialsName with the BareMetalHostCredential object name.

Note

When you delete a BareMetalHost object, the related BareMetalHostCredential object is deleted automatically.

Note

On existing clusters, a BareMetalHostCredential object is automatically created for each BareMetalHost object during a cluster update.
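
For example, a minimal BareMetalHostCredential object before processing may look as follows. The apiVersion and the exact field layout in this sketch are assumptions based on the behavior described above; refer to the bare metal API reference for the exact schema:

apiVersion: kaas.mirantis.com/v1alpha1   # illustrative, verify against the API reference
kind: BareMetalHostCredential
metadata:
  name: worker-0-cred
  namespace: managed-ns
spec:
  username: admin
  password:
    value: <password>   # erased automatically and moved to the underlying Secret object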

Dnsmasq configuration enhancements

Enhanced the logic of the dnsmasq server to listen on the PXE network of the management cluster by using the dhcp-lb Kubernetes Service instead of listening on the PXE interface of one management cluster node.

To configure the DHCP relay service, specify the external address of the dhcp-lb Kubernetes Service as the upstream address for relayed DHCP requests, that is, as the IP helper address for DHCP. The dnsmasq Deployment behind this Service accepts only relayed DHCP requests.
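
For example, to obtain the external address of the dhcp-lb Service, you can run the following command on the management cluster. The kaas namespace used here is an assumption based on the default location of the bare metal provider services:

kubectl -n kaas get service dhcp-lb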

Container Cloud has its own DHCP relay running on one of the management cluster nodes. That DHCP relay proxies DHCP requests in the same L2 domain where the management cluster nodes are located.

As part of this enhancement, the dnsmasq.dhcp_range parameter is deprecated. Use the Subnet object configuration instead.
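
For example, a Subnet object that defines a DHCP range may look as follows. The label and field names in this sketch, such as ipam/SVC-dhcp-range and includeRanges, are assumptions that reflect the typical Subnet layout; refer to the bare metal operations documentation for the exact schema:

apiVersion: ipam.mirantis.com/v1alpha1   # illustrative, verify against the API reference
kind: Subnet
metadata:
  name: mgmt-dhcp-range
  labels:
    ipam/SVC-dhcp-range: ""
spec:
  cidr: 10.0.50.0/24
  gateway: 10.0.50.1
  includeRanges:
  - 10.0.50.100-10.0.50.200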

Note

If you configured multiple DHCP ranges before Container Cloud 2.21.0 during the management cluster bootstrap, the DHCP configuration will automatically migrate to Subnet objects after cluster upgrade to 2.21.0.

Caution

Using custom DNS server addresses for servers that boot over PXE is not supported.

Combining router and seed node settings on one Equinix Metal server

Implemented the ability to combine configuration of a router and seed node on the same server when preparing infrastructure for an Equinix Metal based Container Cloud deployment with private networking using Terraform templates. Set router_as_seed to true in the required Metro configuration while preparing terraform.tfvars to combine both the router and seed node roles.

Graceful machine deletion

TechPreview

Implemented the possibility to safely clean up node resources using the Container Cloud API before deleting the node from a cluster. Using the deletionPolicy: graceful parameter in the providerSpec.value section of the Machine object, the cloud provider controller now prepares a machine for deletion by cordoning, draining, and removing the related node from Docker Swarm. If required, you can abort a machine deletion when using deletionPolicy: graceful, but only before the related node is removed from Docker Swarm.
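
For example, a fragment of a Machine object with graceful deletion enabled. Only the deletionPolicy parameter is taken from the description above; the rest of the object is omitted and shown for illustration only:

apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: example-machine
spec:
  providerSpec:
    value:
      deletionPolicy: graceful   # enables cordon, drain, and Docker Swarm removal before deletion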

Caution

For MKE clusters that are part of MOSK infrastructure, the feature support will become available in one of the following Container Cloud releases.

Add custom Docker registries using the Container Cloud web UI

Enhanced support for custom Docker registries configuration in management, regional, and managed clusters by adding the Container Registries tab to the Container Cloud web UI. Using this tab, you can configure CA certificates on machines to access private Docker registries.

Note

For MOSK-based deployments, the feature support is available since MOSK 22.5.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.21.0 along with the Cluster releases 11.5.0 and 7.11.0:

  • [23002] Fixed the issue with inability to set a custom value for a predefined node label using the Container Cloud web UI.

  • [26416] Fixed the issue with inability to automatically upload an MKE client bundle during cluster attachment using the Container Cloud web UI.

  • [26740] Fixed the issue with failure to upgrade a management cluster with a Keycloak or web UI TLS custom certificate.

  • [27193] Fixed the issue with missing permissions for the m:kaas:<namespaceName>@member role that are required for the Container Cloud web UI to work properly. The issue relates to reading permissions for resource objects of all providers as well as for the clusterRelease and unsupportedCluster objects, and so on.

  • [26379] Fixed the issue with missing logs for MOSK-related namespaces when using the container-cloud collect logs command without the --extended flag.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.21.0 including the Cluster releases 11.5.0 and 7.11.0.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


MKE
[20651] A cluster deployment or update fails with not ready compose deployments

A managed cluster deployment, attachment, or update to a Cluster release with MKE versions 3.3.13, 3.4.6, 3.5.1, or earlier may fail with the compose pods flapping (ready > terminating > pending) and with the following error message appearing in logs:

'not ready: deployments: kube-system/compose got 0/0 replicas, kube-system/compose-api
 got 0/0 replicas'
 ready: false
 type: Kubernetes

Workaround:

  1. Disable Docker Content Trust (DCT):

    1. Access the MKE web UI as admin.

    2. Navigate to Admin > Admin Settings.

    3. In the left navigation pane, click Docker Content Trust and disable it.

  2. Restart the affected deployments such as calico-kube-controllers, compose, compose-api, coredns, and so on:

    kubectl -n kube-system delete deployment <deploymentName>
    

    Once done, the cluster deployment or update resumes.

  3. Re-enable DCT.



Bare metal
[26659] Regional cluster deployment failure with stuck ‘mcc-cache’ Pods

Fixed in 11.6.0

Deployment of a regional cluster based on bare metal or Equinix Metal with private networking fails with mcc-cache Pods being stuck in the CrashLoopBackOff state with continuous restarts.

As a workaround, remove failed mcc-cache Pods to restart them automatically. For example:

kubectl -n kaas delete pod mcc-cache-0
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.
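
For example:

kubectl cordon <nodeName>
kubectl drain <nodeName>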

[20736] Region deletion failure after regional deployment failure

If a baremetal-based regional cluster deployment fails before pivoting is done, the corresponding region deletion fails.

Workaround:

Using the command below, manually delete all possible traces of the failed regional cluster deployment, including but not limited to the following objects that contain the kaas.mirantis.com/region label of the affected region:

  • cluster

  • machine

  • baremetalhost

  • baremetalhostprofile

  • l2template

  • subnet

  • ipamhost

  • ipaddr

kubectl delete <objectName> -l kaas.mirantis.com/region=<regionName>

Warning

Do not use the same region name again after the regional cluster deployment failure since some objects that reference the region name may still exist.



Equinix Metal with private networking
[26659] Regional cluster deployment failure with stuck ‘mcc-cache’ Pods

Fixed in 11.6.0

Deployment of a regional cluster based on bare metal or Equinix Metal with private networking fails with mcc-cache Pods being stuck in the CrashLoopBackOff state with continuous restarts.

As a workaround, remove failed mcc-cache Pods to restart them automatically. For example:

kubectl -n kaas delete pod mcc-cache-0

vSphere
[26070] RHEL system cannot be registered in Red Hat portal over MITM proxy

Deployment of RHEL machines using the Red Hat portal registration, which requires user and password credentials, over a MITM proxy fails while building the virtual machine template with the following error:

Unable to verify server's identity: [SSL: CERTIFICATE_VERIFY_FAILED]
certificate verify failed (_ssl.c:618)

The Container Cloud deployment gets stuck while applying the RHEL license to machines with the same error in the lcm-agent logs.

As a workaround, use the internal Red Hat Satellite server that a VM can access directly without a MITM proxy.


LCM
[5782] Manager machine fails to be deployed during node replacement

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During replacement of a manager machine, the following problems may occur:

  • The system adds the node to Docker Swarm but not to Kubernetes

  • The node Deployment gets stuck with failed RethinkDB health checks

Workaround:

  1. Delete the failed node.

  2. Wait for the MKE cluster to become healthy. To monitor the cluster status:

    1. Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.

    2. Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.

  3. Deploy a new node.

[5568] The calico-kube-controllers Pod fails to clean up resources

Fixed in 2.28.4 (17.3.4 and 16.3.4)

During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:

  • The calico-kube-controllers Pod fails to clean up resources associated with the deleted node

  • The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had

As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:

kubectl cordon <nodeName>
kubectl drain <nodeName>
[27797] A cluster ‘kubeconfig’ stops working during MKE minor version update

During update of a Container Cloud cluster of any type, if the MKE minor version is updated from 3.4.x to 3.5.x, access to the cluster using the existing kubeconfig fails with the You must be logged in to the server (Unauthorized) error due to OIDC settings being reconfigured.

As a workaround, during the cluster update process, use the admin kubeconfig instead of the existing one. Once the update completes, you can use the existing cluster kubeconfig again.

To obtain the admin kubeconfig:

kubectl --kubeconfig <pathToMgmtKubeconfig> get secret -n <affectedClusterNamespace> \
-o yaml <affectedClusterName>-kubeconfig | awk '/admin.conf/ {print $2}' | \
head -1 | base64 -d > clusterKubeconfig.yaml

If the related cluster is regional, replace <pathToMgmtKubeconfig> with <pathToRegionalKubeconfig>.

[27192] Failure to accept new connections by ‘portforward-controller’

Fixed in 11.6.0 and 12.7.0

During bootstrap of a management or regional cluster of any type, portforward-controller stops accepting new connections after receiving the Accept error: “EOF” error. As a result, nothing is copied between clients.

The workaround below applies only if machines are stuck in the Provision state. Otherwise, contact Mirantis support to further assess the issue.

Workaround:

  1. Verify that machines are stuck in the Provision state for up to 20 minutes or more. For example:

    kubectl --kubeconfig <kindKubeconfigPath> get machines -o wide
    
  2. Verify whether the portforward-controller Pod logs contain the Accept error: “EOF” and Stopped forwarding messages:

    kubectl --kubeconfig <kindKubeconfigPath> -n kaas logs -lapp.kubernetes.io/name=portforward-controller | grep 'Accept error: "EOF"'
    
    kubectl --kubeconfig <kindKubeconfigPath> -n kaas logs -lapp.kubernetes.io/name=portforward-controller | grep 'Stopped forwarding'
    
  3. Select from the following options:

    • If the errors mentioned in the previous step are present:

      1. Restart the portforward-controller Deployment:

        kubectl --kubeconfig <kindKubeconfigPath> -n kaas rollout restart deploy portforward-controller
        
      2. Monitor the states of machines and the portforward-controller Pod logs. If the errors recur, restart the portforward-controller Deployment again.

    • If the errors mentioned in the previous step are not present, contact Mirantis support to further assess the issue.


StackLight
[29329] Recreation of the Patroni container replica is stuck

Fixed in 11.7.0 and 12.7.0

During an update of a Container Cloud cluster of any type, recreation of the Patroni container replica is stuck in the degraded state due to the liveness probe killing the container that runs the pg_rewind procedure. The issue affects clusters on which the pg_rewind procedure takes more time than the full cycle of the liveness probe.

The sample logs of the affected cluster:

INFO: doing crash recovery in a single user mode
ERROR: Crash recovery finished with code=-6
INFO:  stdout=
INFO:  stderr=2023-01-11 10:20:34 GMT [64]: [1-1] 63be8d72.40 0     LOG:  database system was interrupted; last known up at 2023-01-10 17:00:59 GMT
[64]: [2-1] 63be8d72.40 0  LOG:  could not read from log segment 00000002000000000000000F, offset 0: read 0 of 8192
[64]: [3-1] 63be8d72.40 0  LOG:  invalid primary checkpoint record
[64]: [4-1] 63be8d72.40 0  PANIC:  could not locate a valid checkpoint record

Workaround:

For the affected replica and PVC, run:

kubectl delete persistentvolumeclaim/storage-volume-patroni-<replica-id> -n stacklight

kubectl delete pod/patroni-<replica-id> -n stacklight
[28526] CPU throttling for ‘kaas-exporter’ blocking metric collection

Fixed in 11.6.0 and 12.7.0

A low CPU limit of 100m for kaas-exporter blocks metric collection.

As a workaround, increase the CPU limit for kaas-exporter to 500m on the management cluster in the spec:providerSpec:value:kaas:management:helmReleases: section as described in MOSK documentation: Underlay Kubernetes operations - Increase memory limits for cluster components.
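
A minimal sketch of such an override in the management Cluster object follows. The resources layout under the kaas-exporter release values is an assumption; verify it against the referenced documentation:

spec:
  providerSpec:
    value:
      kaas:
        management:
          helmReleases:
          - name: kaas-exporter
            values:
              resources:           # assumed values structure of the kaas-exporter chart
                limits:
                  cpu: "500m"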

[28479] Increase of the ‘metric-collector’ Pod restarts due to OOM

Fixed in 11.7.0 and 12.7.0

On baremetal-based management clusters, the restart count of the metric-collector Pod increases over time with reason: OOMKilled in containerStatuses of the metric-collector Pod. Only clusters with HTTP proxy enabled are affected.

Such behavior is expected. Therefore, disregard these restarts.

[28134] Failure to update a cluster with nodes in the ‘Prepare’ state

Fixed in 11.6.0 and 12.7.0

A Container Cloud cluster of any type fails to update with nodes being stuck in the Prepare state and the following example error in Conditions of the affected machine:

Error: error when evicting pods/"patroni-13-2" -n "stacklight": global timeout reached: 10m0s

Other symptoms of the issue are as follows:

  • One of the Patroni Pods has 2/3 of containers ready. For example:

    kubectl get po -n stacklight -l app=patroni
    
    NAME           READY   STATUS    RESTARTS   AGE
    patroni-13-0   3/3     Running   0          32h
    patroni-13-1   3/3     Running   0          38h
    patroni-13-2   2/3     Running   0          38h
    
  • The patroni-patroni-exporter container from the affected Pod is not ready. For example:

    kubectl get pod/patroni-13-2 -n stacklight -o jsonpath='{.status.containerStatuses[?(@.name=="patroni-patroni-exporter")].ready}'
    
    false
    

As a workaround, restart the patroni-patroni-exporter container of the affected Patroni Pod:

kubectl exec <affectedPatroniPodName> -n stacklight -c patroni-patroni-exporter -- kill 1

For example:

kubectl exec patroni-13-2 -n stacklight -c patroni-patroni-exporter -- kill 1
[27732-1] OpenSearch PVC size custom settings are dismissed during deployment

Fixed in 11.6.0 and 12.7.0

The OpenSearch elasticsearch.persistentVolumeClaimSize custom setting is overwritten by logging.persistentVolumeClaimSize during deployment of a Container Cloud cluster of any type and is set to the default 30Gi.

Note

This issue does not block the OpenSearch cluster operations if the default retention time is set. The default setting is usually enough for the capacity size of this cluster.

The issue may affect the following Cluster releases:

  • 11.2.0 - 11.5.0

  • 7.8.0 - 7.11.0

  • 8.8.0 - 8.10.0, 12.5.0 (MOSK clusters)

  • 10.2.4 - 10.8.1 (attached MKE 3.4.x clusters)

  • 13.0.2 - 13.5.1 (attached MKE 3.5.x clusters)

To verify that the cluster is affected:

Note

In the commands below, substitute parameters enclosed in angle brackets to match the affected cluster values.

kubectl --kubeconfig=<managementClusterKubeconfigPath> \
-n <affectedClusterProjectName> \
get cluster <affectedClusterName> \
-o=jsonpath='{.spec.providerSpec.value.helmReleases[*].values.elasticsearch.persistentVolumeClaimSize}' | xargs echo config size:


kubectl --kubeconfig=<affectedClusterKubeconfigPath> \
-n stacklight get pvc -l 'app=opensearch-master' \
-o=jsonpath="{.items[*].status.capacity.storage}" | xargs echo capacity sizes:
  • The cluster is not affected if the configuration size value matches or is less than any capacity size. For example:

    config size: 30Gi
    capacity sizes: 30Gi 30Gi 30Gi
    
    config size: 50Gi
    capacity sizes: 100Gi 100Gi 100Gi
    
  • The cluster is affected if the configuration size is larger than any capacity size. For example:

    config size: 200Gi
    capacity sizes: 100Gi 100Gi 100Gi
    

Workaround for a new cluster creation:

  1. Select from the following options:

    • For a management or regional cluster, during the bootstrap procedure, open cluster.yaml.template for editing.

    • For a managed cluster, open the Cluster object for editing.

      Caution

      For a managed cluster, use the Container Cloud API instead of the web UI for cluster creation.

  2. In the opened .yaml file, add logging.persistentVolumeClaimSize along with elasticsearch.persistentVolumeClaimSize. For example:

    apiVersion: cluster.k8s.io/v1alpha1
    spec:
    ...
      providerSpec:
        value:
        ...
          helmReleases:
          - name: stacklight
            values:
              elasticsearch:
                persistentVolumeClaimSize: 100Gi
              logging:
                enabled: true
                persistentVolumeClaimSize: 100Gi
    
  3. Continue the cluster deployment. The system will use the custom value set in logging.persistentVolumeClaimSize.

    Caution

    If elasticsearch.persistentVolumeClaimSize is absent in the .yaml file, the Admission Controller blocks the configuration update.

Workaround for an existing cluster:

Caution

During the application of the below workarounds, a short outage of OpenSearch and its dependent components may occur with the following alerts firing on the cluster. This behavior is expected. Therefore, disregard these alerts.

StackLight alerts list firing during cluster update

Cluster size and outage probability level

Alert name

Label name and component

Any cluster with high probability

KubeStatefulSetOutage

statefulset=opensearch-master

KubeDeploymentOutage

  • deployment=opensearch-dashboards

  • deployment=metricbeat

Large cluster with average probability

KubePodsNotReady Removed in 17.0.0, 16.0.0, and 14.1.0

  • created_by_name="opensearch-master*"

  • created_by_name="opensearch-dashboards*"

  • created_by_name="metricbeat-*"

OpenSearchClusterStatusWarning

n/a

OpenSearchNumberOfPendingTasks

n/a

OpenSearchNumberOfInitializingShards

n/a

OpenSearchNumberOfUnassignedShards Removed in 2.27.0 (17.2.0 and 16.2.0)

n/a

Any cluster with low probability

KubeStatefulSetReplicasMismatch

statefulset=opensearch-master

KubeDeploymentReplicasMismatch

  • deployment=opensearch-dashboards

  • deployment=metricbeat

StackLight in HA mode with LVP provisioner for OpenSearch PVCs

Warning

After applying this workaround, the existing log data will be lost. Therefore, if required, migrate log data to a new persistent volume (PV).

  1. Move the existing log data to a new PV, if required.

  2. Increase the disk size for local volume provisioner (LVP).

  3. Scale down the opensearch-master StatefulSet with dependent resources to 0 and disable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 0 statefulset opensearch-master
    
    kubectl -n stacklight scale --replicas 0 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 0 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : true }}'
    
  4. Recreate the opensearch-master StatefulSet with the updated disk size.

    kubectl get statefulset opensearch-master -o yaml -n stacklight | sed 's/storage: 30Gi/storage: <pvcSize>/g' > opensearch-master.yaml
    
    kubectl -n stacklight delete statefulset opensearch-master
    
    kubectl create -f opensearch-master.yaml
    

    Replace <pvcSize> with the elasticsearch.persistentVolumeClaimSize value.

  5. Delete existing PVCs:

    kubectl delete pvc -l 'app=opensearch-master' -n stacklight
    

    Warning

    This command removes all existing logs data from PVCs.

  6. In the Cluster configuration, set the same logging.persistentVolumeClaimSize as the size of elasticsearch.persistentVolumeClaimSize. For example:

    apiVersion: cluster.k8s.io/v1alpha1
    kind: Cluster
    spec:
    ...
      providerSpec:
        value:
        ...
          helmReleases:
          - name: stacklight
            values:
              elasticsearch:
                persistentVolumeClaimSize: 100Gi
              logging:
                enabled: true
                persistentVolumeClaimSize: 100Gi
    
  7. Scale up the opensearch-master StatefulSet with dependent resources and enable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 3 statefulset opensearch-master
    
    sleep 100
    
    kubectl -n stacklight scale --replicas 1 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 1 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : false }}'
    
StackLight in non-HA mode with an expandable StorageClass for OpenSearch PVCs

Note

To verify whether a StorageClass is expandable:

kubectl -n stacklight get pvc | grep opensearch-master | awk '{print $6}' | xargs -I{} kubectl get storageclass {} -o yaml | grep 'allowVolumeExpansion: true'

A positive system response is allowVolumeExpansion: true. A negative system response is blank or false.

  1. Scale down the opensearch-master StatefulSet with dependent resources to 0 and disable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 0 statefulset opensearch-master
    
    kubectl -n stacklight scale --replicas 0 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 0 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : true }}'
    
  2. Recreate the opensearch-master StatefulSet with the updated disk size.

    kubectl -n stacklight get statefulset opensearch-master -o yaml | sed 's/storage: 30Gi/storage: <pvcSize>/g' > opensearch-master.yaml
    
    kubectl -n stacklight delete statefulset opensearch-master
    
    kubectl create -f opensearch-master.yaml
    

    Replace <pvcSize> with the elasticsearch.persistentVolumeClaimSize value.

  3. Patch the PVCs with the new elasticsearch.persistentVolumeClaimSize value:

    kubectl -n stacklight patch pvc opensearch-master-opensearch-master-0 -p '{ "spec": { "resources": { "requests": { "storage": "<pvcSize>" }}}}'
    

    Replace <pvcSize> with the elasticsearch.persistentVolumeClaimSize value.

  4. In the Cluster configuration, set logging.persistentVolumeClaimSize the same as the size of elasticsearch.persistentVolumeClaimSize. For example:

     apiVersion: cluster.k8s.io/v1alpha1
     kind: Cluster
     spec:
     ...
       providerSpec:
         value:
         ...
           helmReleases:
           - name: stacklight
             values:
               elasticsearch:
                 persistentVolumeClaimSize: 100Gi
               logging:
                 enabled: true
                 persistentVolumeClaimSize: 100Gi
    
  5. Scale up the opensearch-master StatefulSet with dependent resources to 1 and enable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 1 statefulset opensearch-master
    
    sleep 100
    
    kubectl -n stacklight scale --replicas 1 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 1 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : false }}'
    
StackLight in non-HA mode with a non-expandable StorageClass and no LVP for OpenSearch PVCs

Warning

After applying this workaround, the existing log data will be lost. Depending on your custom provisioner, you may find a third-party tool, such as pv-migrate, that provides a possibility to copy all data from one PV to another.

If data loss is acceptable, proceed with the workaround below.

Note

To verify whether a StorageClass is expandable:

kubectl -n stacklight get pvc | grep opensearch-master | awk '{print $6}' | xargs -I{} kubectl get storageclass {} -o yaml | grep 'allowVolumeExpansion: true'

A positive system response is allowVolumeExpansion: true. A negative system response is blank or false.

  1. Scale down the opensearch-master StatefulSet with dependent resources to 0 and disable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 0 statefulset opensearch-master
    
    kubectl -n stacklight scale --replicas 0 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 0 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : true }}'
    
  2. Recreate the opensearch-master StatefulSet with the updated disk size:

    kubectl get statefulset opensearch-master -o yaml -n stacklight | sed 's/storage: 30Gi/storage: <pvcSize>/g' > opensearch-master.yaml
    
    kubectl -n stacklight delete statefulset opensearch-master
    
    kubectl create -f opensearch-master.yaml
    

    Replace <pvcSize> with the elasticsearch.persistentVolumeClaimSize value.

  3. Delete existing PVCs:

    kubectl delete pvc -l 'app=opensearch-master' -n stacklight
    

    Warning

    This command removes all existing logs data from PVCs.

  4. In the Cluster configuration, set logging.persistentVolumeClaimSize to the same value as the size of the elasticsearch.persistentVolumeClaimSize parameter. For example:

     apiVersion: cluster.k8s.io/v1alpha1
     kind: Cluster
     spec:
     ...
       providerSpec:
         value:
         ...
           helmReleases:
           - name: stacklight
             values:
               elasticsearch:
                 persistentVolumeClaimSize: 100Gi
               logging:
                 enabled: true
                 persistentVolumeClaimSize: 100Gi
    
  5. Scale up the opensearch-master StatefulSet with dependent resources to 1 and enable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 1 statefulset opensearch-master
    
    sleep 100
    
    kubectl -n stacklight scale --replicas 1 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 1 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : false }}'
    
[27732-2] Custom settings for ‘elasticsearch.logstashRetentionTime’ are dismissed

Fixed in 11.6.0 and 12.7.0

Custom settings for the deprecated elasticsearch.logstashRetentionTime parameter are overwritten by the default setting set to 1 day.

The issue may affect the following Cluster releases with enabled elasticsearch.logstashRetentionTime:

  • 11.2.0 - 11.5.0

  • 7.8.0 - 7.11.0

  • 8.8.0 - 8.10.0, 12.5.0 (MOSK clusters)

  • 10.2.4 - 10.8.1 (attached MKE 3.4.x clusters)

  • 13.0.2 - 13.5.1 (attached MKE 3.5.x clusters)

As a workaround, in the Cluster object, replace elasticsearch.logstashRetentionTime with elasticsearch.retentionTime, which was implemented to replace the deprecated parameter. For example:

apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
spec:
  ...
  providerSpec:
    value:
    ...
      helmReleases:
      - name: stacklight
        values:
          elasticsearch:
            retentionTime:
              logstash: 10
              events: 10
              notifications: 10
          logging:
            enabled: true

For the StackLight configuration procedure and parameters description, refer to Configure StackLight.

[20876] StackLight pods get stuck with the ‘NodeAffinity failed’ error

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshoot StackLight.

On a managed cluster, the StackLight pods may get stuck with the Pod predicate NodeAffinity failed error in the pod status. The issue may occur if the StackLight node label was added to one machine and then removed from another one.

The issue does not affect the StackLight services: all required StackLight pods migrate successfully, except for the extra pods that are created and get stuck during pod migration.

As a workaround, remove the stuck pods:

kubectl --kubeconfig <managedClusterKubeconfig> -n stacklight delete pod <stuckPodName>

Storage
[28783] Ceph condition stuck in absence of Ceph cluster secrets info

Fixed in 11.6.0 and 12.7.0

The Ceph condition gets stuck in the absence of the Ceph cluster secrets information. This behavior is observed on MOSK 22.3 clusters running on top of Container Cloud 2.21.

The symptoms include:

  • The Cluster object contains the following condition:

    Failed to configure Ceph cluster: ceph cluster status info is not \
    updated at least for 5 minutes, ceph cluster secrets info is not available yet
    
  • The ceph-kcc-controller logs from the kaas namespace contain the following log lines:

    2022-11-30 19:39:17.393595 E | ceph-spec: failed to update cluster condition to \
    {Type:Ready Status:True Reason:ClusterCreated Message:Cluster created successfully \
    LastHeartbeatTime:2022-11-30 19:39:17.378401993 +0000 UTC m=+2617.717554955 \
    LastTransitionTime:2022-05-16 16:14:37 +0000 UTC}. failed to update object \
    "rook-ceph/rook-ceph" status: Operation cannot be fulfilled on \
    cephclusters.ceph.rook.io "rook-ceph": the object has been modified; please \
    apply your changes to the latest version and try again
    

Workaround:

  1. Edit KaaSCephCluster of the affected managed cluster:

    kubectl -n <managedClusterProject> edit kaascephcluster
    

    Substitute <managedClusterProject> with the corresponding managed cluster namespace.

  2. Define the version parameter in the KaaSCephCluster spec:

    spec:
      cephClusterSpec:
        version: 15.2.13
    

    Note

    Starting from MOSK 22.4, the Ceph cluster version updates to 15.2.17. Therefore, remove the version parameter definition from KaaSCephCluster after the managed cluster update.

    Save the updated KaaSCephCluster spec.

  3. Find the MiraCeph custom resource definition (CRD) on a managed cluster and copy all annotations starting with meta.helm.sh:

    kubectl --kubeconfig <managedClusterKubeconfig> get crd miracephs.lcm.mirantis.com -o yaml
    

    Substitute <managedClusterKubeconfig> with a corresponding managed cluster kubeconfig.

    Example of a system output:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      annotations:
        controller-gen.kubebuilder.io/version: v0.6.0
        # save all annotations with "meta.helm.sh" somewhere
        meta.helm.sh/release-name: ceph-controller
        meta.helm.sh/release-namespace: ceph
    ...
    
  4. Create the miracephsecretscrd.yaml file and fill it with the following template:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      annotations:
        controller-gen.kubebuilder.io/version: v0.6.0
        <insert all "meta.helm.sh" annotations here>
      labels:
        app.kubernetes.io/managed-by: Helm
      name: miracephsecrets.lcm.mirantis.com
    spec:
      conversion:
        strategy: None
      group: lcm.mirantis.com
      names:
        kind: MiraCephSecret
        listKind: MiraCephSecretList
        plural: miracephsecrets
        singular: miracephsecret
      scope: Namespaced
      versions:
        - name: v1alpha1
          schema:
            openAPIV3Schema:
              description: MiraCephSecret aggregates secrets created by Ceph
              properties:
                apiVersion:
                  type: string
                kind:
                  type: string
                metadata:
                  type: object
                status:
                  properties:
                    lastSecretCheck:
                      type: string
                    lastSecretUpdate:
                      type: string
                    messages:
                      items:
                        type: string
                      type: array
                    state:
                      type: string
                  type: object
              type: object
          served: true
          storage: true
    

    Insert the copied meta.helm.sh annotations to the metadata.annotations section of the template.

  5. Apply miracephsecretscrd.yaml on the managed cluster:

    kubectl --kubeconfig <managedClusterKubeconfig> apply -f miracephsecretscrd.yaml
    

    Substitute <managedClusterKubeconfig> with a corresponding managed cluster kubeconfig.

  6. Obtain the MiraCeph name from the managed cluster:

    kubectl --kubeconfig <managedClusterKubeconfig> -n ceph-lcm-mirantis get miraceph -o name
    

    Substitute <managedClusterKubeconfig> with the corresponding managed cluster kubeconfig.

    Example of a system output:

    miraceph.lcm.mirantis.com/rook-ceph
    

    Copy the MiraCeph name after the slash, which is rook-ceph in the example above.

  7. Create the mcs.yaml file and fill it with the following template:

    apiVersion: lcm.mirantis.com/v1alpha1
    kind: MiraCephSecret
    metadata:
      name: <miracephName>
      namespace: ceph-lcm-mirantis
    status: {}
    

    Substitute <miracephName> with the MiraCeph name from the previous step.

  8. Apply mcs.yaml on the managed cluster:

    kubectl --kubeconfig <managedClusterKubeconfig> apply -f mcs.yaml
    

    Substitute <managedClusterKubeconfig> with a corresponding managed cluster kubeconfig.

After some delay, the cluster condition will be updated to the health state.

[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a bare metal based managed cluster with Ceph enabled fails with a PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the descriptions of the Pods that failed to run contain FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.

Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.21.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.34.16

aws-credentials-controller

1.34.16

Azure Updated

azure-provider

1.34.16

azure-credentials-controller

1.34.16

Bare metal

ambassador

1.20.1-alpine

baremetal-operator Updated

base-focal-20220611131433

baremetal-public-api Updated

1.34.16

baremetal-provider Updated

1.34.16

baremetal-resource-controller

base-focal-20220627134752

ironic

yoga-focal-20220719132049

kaas-ipam

base-focal-20220503165133

keepalived Updated

0.19.0-5-g6a7e17d

local-volume-provisioner Updated

2.4.0

mariadb

10.4.17-bionic-20220113085105

metallb-controller Updated

0.13.4 0

IAM

iam Updated

2.4.35

iam-controller Updated

1.34.16

keycloak

18.0.0

Container Cloud Updated

admission-controller

1.34.16

agent-controller

1.34.16

byo-credentials-controller

1.34.16

byo-provider

1.34.16

ceph-kcc-controller

1.34.16

cert-manager

1.34.16

client-certificate-controller

1.34.16

event-controller

1.34.16

golang

1.18.5

kaas-public-api

1.34.16

kaas-exporter

1.34.16

kaas-ui

1.34.16

license-controller

1.34.16

lcm-controller

0.3.0-327-gbc30b11b

machinepool-controller

1.34.16

mcc-cache

1.34.16

metrics-server

0.5.2

portforward-controller

1.34.16

proxy-controller

1.34.16

rbac-controller

1.34.16

release-controller

1.34.16

rhellicense-controller

1.34.16

scope-controller

1.34.16

user-controller

1.34.16

Equinix Metal Updated

equinix-provider

1.34.16

equinix-credentials-controller

1.34.16

keepalived

0.19.0-5-g6a7e17d

OpenStack Updated

openstack-provider

1.34.16

os-credentials-controller

1.34.16

VMware vSphere Updated

metallb-controller Updated

0.13.4

vsphere-provider

1.34.16

vsphere-credentials-controller

1.34.16

keepalived

0.19.0-5-g6a7e17d

squid-proxy

0.0.1-7

0

For MOSK-based deployments, the metallb-controller version is updated from 0.12.1 to 0.13.4 in MOSK 22.5.

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.21.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-api Updated

https://binary.mirantis.com/core/helm/baremetal-api-1.34.16.tgz

baremetal-operator Updated

https://binary.mirantis.com/core/helm/baremetal-operator-1.34.17.tgz

baremetal-public-api Updated

https://binary.mirantis.com/core/helm/baremetal-public-api-1.34.16.tgz

ironic-python-agent.initramfs Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20220915111547

ironic-python-agent.kernel Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20220915111547

kaas-ipam Updated

https://binary.mirantis.com/core/helm/kaas-ipam-1.34.16.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.34.16.tgz

local-volume-provisioner Updated

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.34.16.tgz

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-104-6e2e82c.tgz

Docker images

ambassador

mirantis.azurecr.io/general/external/docker.io/library/nginx:1.20.1-alpine

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-focal-20220611131433

baremetal-resource-controller

mirantis.azurecr.io/bm/baremetal-resource-controller:base-focal-20220627134752

dynamic_ipxe Updated

mirantis.azurecr.io/bm/dynamic-ipxe:base-focal-20221018205745

dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-alpine-20221025105458

dnsmasq-controller Updated

mirantis.azurecr.io/bm/dnsmasq-controller:base-focal-20220811133223

ironic

mirantis.azurecr.io/openstack/ironic:yoga-focal-20220719132049

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:yoga-focal-20220719132049

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20220602121226

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-focal-20220503165133

mariadb

mirantis.azurecr.io/general/mariadb:10.4.17-bionic-20220113085105

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.19.0-5-g6a7e17d

metallb-controller Updated 0

mirantis.azurecr.io/bm/external/metallb/controller:v0.13.4

metallb-speaker Updated 0

mirantis.azurecr.io/bm/external/metallb/speaker:v0.13.4

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-focal-20220128103433

0(1,2)

For MOSK-based deployments, the metallb version is updated from 0.12.1 to 0.13.4 in MOSK 22.5.


Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.34.16.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.34.16.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.34.16.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.34.16.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.34.16.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.34.16.tgz

azure-credentials-controller

https://binary.mirantis.com/core/helm/azure-credentials-controller-1.34.16.tgz

azure-provider

https://binary.mirantis.com/core/helm/azure-provider-1.34.16.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.34.16.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.34.16.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.34.16.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.34.16.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.34.16.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.34.16.tgz

configuration-collector New

https://binary.mirantis.com/core/helm/configuration-collector-1.34.16.tgz

equinix-credentials-controller

https://binary.mirantis.com/core/helm/equinix-credentials-controller-1.34.16.tgz

equinix-provider

https://binary.mirantis.com/core/helm/equinix-provider-1.34.16.tgz

equinixmetalv2-provider

https://binary.mirantis.com/core/helm/equinixmetalv2-provider-1.34.16.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.34.16.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.34.16.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.34.16.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.34.16.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.34.16.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.34.16.tgz

license-controller

https://binary.mirantis.com/core/helm/license-controller-1.34.16.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.34.16.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.34.16.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.34.16.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.34.16.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.34.16.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.34.16.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.34.16.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.34.16.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.34.16.tgz

scope-controller

http://binary.mirantis.com/core/helm/scope-controller-1.34.16.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.34.16.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.34.16.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.34.16.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.34.16.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.34.16

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.34.16

aws-cluster-api-controller Updated

mirantis.azurecr.io/core/aws-cluster-api-controller:1.34.16

aws-credentials-controller Updated

mirantis.azurecr.io/core/aws-credentials-controller:1.34.16

azure-cluster-api-controller Updated

mirantis.azurecr.io/core/azure-cluster-api-controller:1.34.16

azure-credentials-controller Updated

mirantis.azurecr.io/core/azure-credentials-controller:1.34.16

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.34.16

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.34.16

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.34.16

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.6.1

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.34.16

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.34.16

cluster-api-provider-equinix Updated

mirantis.azurecr.io/core/equinix-cluster-api-controller:1.34.16

equinix-credentials-controller Updated

mirantis.azurecr.io/core/equinix-credentials-controller:1.34.16

frontend Updated

mirantis.azurecr.io/core/frontend:1.34.16

haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.17.0-8-g6ca89d5

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.34.16

kaas-exporter

mirantis.azurecr.io/core/kaas-exporter:1.34.16

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.34.16

lcm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-327-gbc30b11b

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.34.16

machinepool-controller Updated

mirantis.azurecr.io/core/machinepool-controller:1.34.16

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.19.0-5-g6a7e17d

metrics-server

mirantis.azurecr.io/core/external/metrics-server:v0.5.2

nginx

mirantis.azurecr.io/core/external/nginx:1.34.16

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.34.16

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.34.16

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.34.16

proxy-controller Updated

mirantis.azurecr.io/core/proxy-controller:1.34.16

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.34.16

registry

mirantis.azurecr.io/lcm/registry:2.7.1

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.34.16

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.34.16

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.34.16

squid-proxy Updated

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-7

storage-discovery

mirantis.azurecr.io/core/storage-discovery:1.34.16

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.34.16

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.34.16

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.34.16


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

Helm charts Updated

iam

http://binary.mirantis.com/iam/helm/iam-2.4.35.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.13.tgz

keycloak_proxy

http://binary.mirantis.com/core/helm/keycloak_proxy-1.34.16.tgz

Docker images

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.0-20200311160233

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.6.7-focal-20220811085105

keycloak Updated

mirantis.azurecr.io/iam/keycloak:0.5.12

keycloak-gatekeeper Updated

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-3

Post-upgrade actions

Since Kubernetes policy does not allow updating images in existing IAM jobs, after Container Cloud automatically upgrades to 2.21.0, update the MariaDB image manually using the following steps:

  1. Delete the existing job:

    kubectl delete job -n kaas iam-cluster-wait
    
  2. In the management Cluster object, add the following snippet:

    kaas:
      management:
        enabled: true
        helmReleases:
        - name: iam
          values:
            keycloak:
              mariadb:
                images:
                  tags:
                    mariadb_scripted_test: general/mariadb:10.6.7-focal-20220811085105
    

    Wait until helm-controller applies changes.

  3. Verify that the job was recreated and the new image was added:

    kubectl describe job -n kaas iam-cluster-wait | grep -i image
    
2.20.1

The Mirantis Container Cloud GA release 2.20.1 is based on 2.20.0 and:

  • Introduces support for the Cluster release 8.10.0 that is based on the Cluster release 7.10.0 and represents Mirantis OpenStack for Kubernetes (MOSK) 22.4.

    This Cluster release is based on the updated version of Mirantis Kubernetes Engine 3.4.10 with Kubernetes 1.20 and Mirantis Container Runtime 20.10.12.

  • Supports the latest Cluster releases 7.10.0 and 11.4.0.

  • Does not support greenfield deployments based on deprecated Cluster releases 11.3.0, 8.8.0, and 7.9.0. Use the latest available Cluster releases of the series instead.

For details about the Container Cloud release 2.20.1, refer to its parent release 2.20.0.

Caution

Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

2.20.0

The Mirantis Container Cloud GA release 2.20.0:

  • Introduces support for the Cluster release 11.4.0 that is based on Mirantis Container Runtime 20.10.12 and Mirantis Kubernetes Engine 3.5.4 with Kubernetes 1.21.

  • Introduces support for the Cluster release 7.10.0 that is based on Mirantis Container Runtime 20.10.12 and Mirantis Kubernetes Engine 3.4.10 with Kubernetes 1.20.

  • Supports the Cluster release 8.8.0 that is based on the Cluster release 7.8.0 and represents Mirantis OpenStack for Kubernetes (MOSK) 22.3.

  • Does not support greenfield deployments on deprecated Cluster releases 11.3.0, 8.6.0, and 7.9.0. Use the latest available Cluster releases of the series instead.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.20.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.20.0. For the list of enhancements in the Cluster releases 11.4.0 and 7.10.0 that are introduced by the Container Cloud release 2.20.0, see the Cluster releases (managed).


IAM ‘member’ role

Added the IAM member role to the existing IAM roles list. The Infrastructure Operator with the member role has read and write access to the Container Cloud API, which allows performing cluster operations, but does not have access to IAM objects.

Bastion node configuration for OpenStack and AWS managed clusters

Implemented the capability to configure the Bastion node on greenfield deployments of the OpenStack-based and AWS-based managed clusters using the Container Cloud web UI. Using the Create Cluster wizard, you can now configure the following parameters for the Bastion node:

  • OpenStack-based: flavor, image, availability zone, server metadata, booting from a volume

  • AWS-based: instance type, AMI ID

Note

Reconfiguration of the Bastion node on an existing cluster is not supported.

Mandatory IPAM service label for bare metal LCM subnets

Made the ipam/SVC-k8s-lcm label mandatory for the LCM subnet on new deployments of management and managed bare metal clusters. It allows the LCM Agent to correctly identify IP addresses to use on multi-homed bare metal hosts. Therefore, you must add this label explicitly on new clusters.

Each node of every cluster must now have only one IP address in the LCM network that is allocated from one of the Subnet objects having the ipam/SVC-k8s-lcm label defined. Therefore, all Subnet objects used for LCM networks must have the ipam/SVC-k8s-lcm label defined.
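
For example, a Subnet object that serves the LCM network carries the label as follows (a minimal sketch; the API group, object name, namespace, label value, and CIDR are illustrative and must match your environment):

apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: lcm-subnet
  namespace: managed-ns
  labels:
    ipam/SVC-k8s-lcm: "1"
spec:
  cidr: 10.0.0.0/24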

Note

For MOSK-based deployments, the feature support is available since MOSK 22.4.

Flexible size units for bare metal host profiles

Implemented the possibility to use flexible size units throughout bare metal host profiles for management, regional, and managed clusters. For example, you can now use either sizeGiB: 0.1 or size: 100Mi when specifying a device size. The size without units is counted in bytes. For example, size: 120 means 120 bytes.
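
The following BareMetalHostProfile fragment illustrates the new notation (a minimal sketch; the surrounding field layout, partition names, and values are illustrative and must match your profile):

spec:
  devices:
  - device:
      wipe: true
    partitions:
    - name: uefi
      size: 100Mi    # with units; instead of sizeGiB: 0.1
    - name: root
      size: 60Gi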

Caution

Mirantis recommends using only one parameter name and one set of units throughout the configuration files. If both sizeGiB and size are used, sizeGiB is ignored during deployment and the suffix is adjusted accordingly. For example, 1.5Gi is serialized as 1536Mi.

Note

For MOSK-based deployments, the feature support is available since MOSK 22.4.

General availability support for MITM proxy

Completed integration of the man-in-the-middle (MITM) proxy support for offline deployments by adding AWS, vSphere, and Equinix Metal with private networking to the list of existing supported providers: OpenStack and bare metal.

The feature allows monitoring all cluster traffic for security and audit purposes using trusted proxy CA certificates, which you can now add through the CA Certificate check box in the Add new Proxy window during managed cluster creation.

Note

  • For Azure and Equinix Metal with public networking, the feature is not supported

  • For MOSK-based deployments, the feature support will become available in one of the following Container Cloud releases.

Configuration of TLS certificates for ‘mcc-cache’ and MKE

Implemented the ability to configure TLS certificates for mcc-cache on management or regional clusters and for MKE on managed clusters deployed or updated by Container Cloud using the latest Cluster release.

Note

TLS certificates configuration for MKE is not supported:

  • For MOSK-based clusters

  • For attached MKE clusters that were not originally deployed by Container Cloud

Documentation enhancements

On top of continuous improvements delivered to the existing Container Cloud guides, added a document on how to increase the overall storage size for all Ceph pools of the same device class: hdd, ssd, or nvme. For details, see Increase Ceph cluster storage size.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.20.0 along with the Cluster releases 11.4.0 and 7.10.0:

  • [25476] Fixed the timeout behavior to avoid Keepalived and HAProxy check failures.

  • [25076] Fixed the remote_syslog configuration. Now, you can optionally define SSL verification modes. For details, see MOSK Operations Guide: StackLight configuration parameters - Logging to syslog.

  • [24927] Fixed the issue wherein a failure to create lcmclusterstate did not trigger a retry.

  • [24852] Fixed the issue wherein the Upgrade Schedule tab in the Container Cloud web UI was displaying the NOT ALLOWED label instead of ALLOWED if the upgrade was enabled.

  • [24837] Fixed the issue wherein some Keycloak iam-keycloak-* pods were in the CrashLoopBackOff state during an update of a baremetal-based management or managed cluster with enabled FIPs.

  • [24813] Fixed the issue wherein the IPaddr objects were not reconciled after the ipam/SVC-* labels changed on the parent subnet. This prevented the ipam/SVC-* labels from propagating to IPaddr objects and caused the serviceMap update to fail in the corresponding IpamHost.

  • [23125] Fixed the issue wherein an OpenStack-based regional cluster creation in an offline mode was failing. Adding the Kubernetes load balancer address to the NO_PROXY environment variable is no longer required.

  • [22576] Fixed the issue wherein provisioning-ansible did not use the wipe flags during the deployment phase.

  • [5238] Improved the Bastion readiness checks to avoid issues with some clusters having several Bastion nodes.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.20.0 including the Cluster releases 11.4.0 and 7.10.0.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


MKE
[20651] A cluster deployment or update fails with not ready compose deployments

A managed cluster deployment, attachment, or update to a Cluster release with MKE versions 3.3.13, 3.4.6, 3.5.1, or earlier may fail with the compose pods flapping (ready > terminating > pending) and with the following error message appearing in logs:

'not ready: deployments: kube-system/compose got 0/0 replicas, kube-system/compose-api
 got 0/0 replicas'
 ready: false
 type: Kubernetes

Workaround:

  1. Disable Docker Content Trust (DCT):

    1. Access the MKE web UI as admin.

    2. Navigate to Admin > Admin Settings.

    3. In the left navigation pane, click Docker Content Trust and disable it.

  2. Restart the affected deployments such as calico-kube-controllers, compose, compose-api, coredns, and so on:

    kubectl -n kube-system delete deployment <deploymentName>
    

    Once done, the cluster deployment or update resumes.

  3. Re-enable DCT.



Bare metal
[26659] Regional cluster deployment failure with stuck ‘mcc-cache’ Pods

Fixed in 11.6.0

Deployment of a regional cluster based on bare metal or Equinix Metal with private networking fails with mcc-cache Pods being stuck in the CrashLoopBackOff state with constant restarts.

As a workaround, remove failed mcc-cache Pods to restart them automatically. For example:

kubectl -n kaas delete pod mcc-cache-0

[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.
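
For example (a sketch; additional drain flags may be required depending on the workloads running on the node):

kubectl cordon <nodeName>
kubectl drain <nodeName> --ignore-daemonsets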

[20736] Region deletion failure after regional deployment failure

If a baremetal-based regional cluster deployment fails before pivoting is done, the corresponding region deletion fails.

Workaround:

Using the command below, manually delete all possible traces of the failed regional cluster deployment, including but not limited to the following objects that contain the kaas.mirantis.com/region label of the affected region:

  • cluster

  • machine

  • baremetalhost

  • baremetalhostprofile

  • l2template

  • subnet

  • ipamhost

  • ipaddr

kubectl delete <objectName> -l kaas.mirantis.com/region=<regionName>

Warning

Do not use the same region name again after the regional cluster deployment failure since some objects that reference the region name may still exist.



Equinix Metal with private networking
[26659] Regional cluster deployment failure with stuck ‘mcc-cache’ Pods

Fixed in 11.6.0

Deployment of a regional cluster based on bare metal or Equinix Metal with private networking fails with mcc-cache Pods being stuck in the CrashLoopBackOff state with constant restarts.

As a workaround, remove failed mcc-cache Pods to restart them automatically. For example:

kubectl -n kaas delete pod mcc-cache-0

vSphere
[26070] RHEL system cannot be registered in Red Hat portal over MITM proxy

Deployment of RHEL machines using the Red Hat portal registration, which requires user and password credentials, over MITM proxy fails while building the virtual machine template with the following error:

Unable to verify server's identity: [SSL: CERTIFICATE_VERIFY_FAILED]
certificate verify failed (_ssl.c:618)

The Container Cloud deployment gets stuck while applying the RHEL license to machines with the same error in the lcm-agent logs.

As a workaround, use the internal Red Hat Satellite server that a VM can access directly without a MITM proxy.


StackLight
[28526] CPU throttling for ‘kaas-exporter’ blocking metric collection

Fixed in 11.6.0 and 12.7.0

A low CPU limit 100m for kaas-exporter blocks metric collection.

As a workaround, increase the CPU limit for kaas-exporter to 500m on the management cluster in the spec:providerSpec:value:kaas:management:helmReleases: section as described in MOSK documentation: Underlay Kubernetes operations - Increase memory limits for cluster components.
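
The following snippet illustrates such an override in the management Cluster object (a sketch; the resources layout under values is an assumption and must be verified against the actual kaas-exporter chart values):

spec:
  providerSpec:
    value:
      kaas:
        management:
          helmReleases:
          - name: kaas-exporter
            values:
              resources:        # assumed values layout
                limits:
                  cpu: 500m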

[27732-1] OpenSearch PVC size custom settings are dismissed during deployment

Fixed in 11.6.0 and 12.7.0

The OpenSearch elasticsearch.persistentVolumeClaimSize custom setting is overwritten by logging.persistentVolumeClaimSize during deployment of a Container Cloud cluster of any type and is set to the default 30Gi.

Note

This issue does not block the OpenSearch cluster operations if the default retention time is set. The default setting is usually enough for the capacity size of this cluster.

The issue may affect the following Cluster releases:

  • 11.2.0 - 11.5.0

  • 7.8.0 - 7.11.0

  • 8.8.0 - 8.10.0, 12.5.0 (MOSK clusters)

  • 10.2.4 - 10.8.1 (attached MKE 3.4.x clusters)

  • 13.0.2 - 13.5.1 (attached MKE 3.5.x clusters)

To verify that the cluster is affected:

Note

In the commands below, substitute parameters enclosed in angle brackets to match the affected cluster values.

kubectl --kubeconfig=<managementClusterKubeconfigPath> \
-n <affectedClusterProjectName> \
get cluster <affectedClusterName> \
-o=jsonpath='{.spec.providerSpec.value.helmReleases[*].values.elasticsearch.persistentVolumeClaimSize}' | xargs echo config size:


kubectl --kubeconfig=<affectedClusterKubeconfigPath> \
-n stacklight get pvc -l 'app=opensearch-master' \
-o=jsonpath="{.items[*].status.capacity.storage}" | xargs echo capacity sizes:

  • The cluster is not affected if the configuration size value matches or is less than any capacity size. For example:

    config size: 30Gi
    capacity sizes: 30Gi 30Gi 30Gi
    
    config size: 50Gi
    capacity sizes: 100Gi 100Gi 100Gi
    
  • The cluster is affected if the configuration size is larger than any capacity size. For example:

    config size: 200Gi
    capacity sizes: 100Gi 100Gi 100Gi
    

Workaround for a new cluster creation:

  1. Select from the following options:

    • For a management or regional cluster, during the bootstrap procedure, open cluster.yaml.template for editing.

    • For a managed cluster, open the Cluster object for editing.

      Caution

      For a managed cluster, use the Container Cloud API instead of the web UI for cluster creation.

  2. In the opened .yaml file, add logging.persistentVolumeClaimSize along with elasticsearch.persistentVolumeClaimSize. For example:

    apiVersion: cluster.k8s.io/v1alpha1
    spec:
    ...
      providerSpec:
        value:
        ...
          helmReleases:
          - name: stacklight
            values:
              elasticsearch:
                persistentVolumeClaimSize: 100Gi
              logging:
                enabled: true
                persistentVolumeClaimSize: 100Gi
    
  3. Continue the cluster deployment. The system will use the custom value set in logging.persistentVolumeClaimSize.

    Caution

    If elasticsearch.persistentVolumeClaimSize is absent in the .yaml file, the Admission Controller blocks the configuration update.

Workaround for an existing cluster:

Caution

During the application of the below workarounds, a short outage of OpenSearch and its dependent components may occur with the following alerts firing on the cluster. This behavior is expected. Therefore, disregard these alerts.

StackLight alerts list firing during cluster update

Cluster size and outage probability level

Alert name

Label name and component

Any cluster with high probability

KubeStatefulSetOutage

statefulset=opensearch-master

KubeDeploymentOutage

  • deployment=opensearch-dashboards

  • deployment=metricbeat

Large cluster with average probability

KubePodsNotReady Removed in 17.0.0, 16.0.0, and 14.1.0

  • created_by_name="opensearch-master*"

  • created_by_name="opensearch-dashboards*"

  • created_by_name="metricbeat-*"

OpenSearchClusterStatusWarning

n/a

OpenSearchNumberOfPendingTasks

n/a

OpenSearchNumberOfInitializingShards

n/a

OpenSearchNumberOfUnassignedShards Removed in 2.27.0 (17.2.0 and 16.2.0)

n/a

Any cluster with low probability

KubeStatefulSetReplicasMismatch

statefulset=opensearch-master

KubeDeploymentReplicasMismatch

  • deployment=opensearch-dashboards

  • deployment=metricbeat

StackLight in HA mode with LVP provisioner for OpenSearch PVCs

Warning

After applying this workaround, the existing log data will be lost. Therefore, if required, migrate log data to a new persistent volume (PV).

  1. Move the existing log data to a new PV, if required.

  2. Increase the disk size for local volume provisioner (LVP).

  3. Scale down the opensearch-master StatefulSet with dependent resources to 0 and disable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 0 statefulset opensearch-master
    
    kubectl -n stacklight scale --replicas 0 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 0 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : true }}'
    
  4. Recreate the opensearch-master StatefulSet with the updated disk size.

    kubectl get statefulset opensearch-master -o yaml -n stacklight | sed 's/storage: 30Gi/storage: <pvcSize>/g' > opensearch-master.yaml
    
    kubectl -n stacklight delete statefulset opensearch-master
    
    kubectl create -f opensearch-master.yaml
    

    Replace <pvcSize> with the elasticsearch.persistentVolumeClaimSize value.

  5. Delete existing PVCs:

    kubectl delete pvc -l 'app=opensearch-master' -n stacklight
    

    Warning

    This command removes all existing logs data from PVCs.

  6. In the Cluster configuration, set the same logging.persistentVolumeClaimSize as the size of elasticsearch.persistentVolumeClaimSize. For example:

    apiVersion: cluster.k8s.io/v1alpha1
    kind: Cluster
    spec:
    ...
      providerSpec:
        value:
        ...
          helmReleases:
          - name: stacklight
            values:
              elasticsearch:
                persistentVolumeClaimSize: 100Gi
              logging:
                enabled: true
                persistentVolumeClaimSize: 100Gi
    
  7. Scale up the opensearch-master StatefulSet with dependent resources and enable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 3 statefulset opensearch-master
    
    sleep 100
    
    kubectl -n stacklight scale --replicas 1 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 1 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : false }}'
    
StackLight in non-HA mode with an expandable StorageClass for OpenSearch PVCs

Note

To verify whether a StorageClass is expandable:

kubectl -n stacklight get pvc | grep opensearch-master | awk '{print $6}' | xargs -I{} kubectl get storageclass {} -o yaml | grep 'allowVolumeExpansion: true'

A positive system response is allowVolumeExpansion: true. A negative system response is blank or false.

  1. Scale down the opensearch-master StatefulSet with dependent resources to 0 and disable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 0 statefulset opensearch-master
    
    kubectl -n stacklight scale --replicas 0 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 0 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : true }}'
    
  2. Recreate the opensearch-master StatefulSet with the updated disk size.

    kubectl -n stacklight get statefulset opensearch-master -o yaml | sed 's/storage: 30Gi/storage: <pvcSize>/g' > opensearch-master.yaml
    
    kubectl -n stacklight delete statefulset opensearch-master
    
    kubectl create -f opensearch-master.yaml
    

    Replace <pvcSize> with the elasticsearch.persistentVolumeClaimSize value.

  3. Patch the PVCs with the new elasticsearch.persistentVolumeClaimSize value:

    kubectl -n stacklight patch pvc opensearch-master-opensearch-master-0 -p  '{ "spec": { "resources": { "requests": { "storage": "<pvcSize>" }}}}'
    

    Replace <pvcSize> with the elasticsearch.persistentVolumeClaimSize value.

  4. In the Cluster configuration, set logging.persistentVolumeClaimSize the same as the size of elasticsearch.persistentVolumeClaimSize. For example:

     apiVersion: cluster.k8s.io/v1alpha1
     kind: Cluster
     spec:
     ...
       providerSpec:
         value:
         ...
           helmReleases:
           - name: stacklight
             values:
               elasticsearch:
                 persistentVolumeClaimSize: 100Gi
               logging:
                 enabled: true
                 persistentVolumeClaimSize: 100Gi
    
  5. Scale up the opensearch-master StatefulSet with dependent resources to 1 and enable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 1 statefulset opensearch-master
    
    sleep 100
    
    kubectl -n stacklight scale --replicas 1 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 1 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : false }}'
    
StackLight in non-HA mode with a non-expandable StorageClass and no LVP for OpenSearch PVCs

Warning

After applying this workaround, the existing log data will be lost. Depending on your custom provisioner, you may find a third-party tool, such as pv-migrate, that can copy all data from one PV to another.

If data loss is acceptable, proceed with the workaround below.

Note

To verify whether a StorageClass is expandable:

kubectl -n stacklight get pvc | grep opensearch-master | awk '{print $6}' | xargs -I{} kubectl get storageclass {} -o yaml | grep 'allowVolumeExpansion: true'

A positive system response is allowVolumeExpansion: true. A negative system response is blank or false.

  1. Scale down the opensearch-master StatefulSet with dependent resources to 0 and disable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 0 statefulset opensearch-master
    
    kubectl -n stacklight scale --replicas 0 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 0 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : true }}'
    
  2. Recreate the opensearch-master StatefulSet with the updated disk size:

    kubectl get statefulset opensearch-master -o yaml -n stacklight | sed 's/storage: 30Gi/storage: <pvcSize>/g' > opensearch-master.yaml
    
    kubectl -n stacklight delete statefulset opensearch-master
    
    kubectl create -f opensearch-master.yaml
    

    Replace <pvcSize> with the elasticsearch.persistentVolumeClaimSize value.

  3. Delete existing PVCs:

    kubectl delete pvc -l 'app=opensearch-master' -n stacklight
    

    Warning

    This command removes all existing logs data from PVCs.

  4. In the Cluster configuration, set logging.persistentVolumeClaimSize to the same value as the size of the elasticsearch.persistentVolumeClaimSize parameter. For example:

     apiVersion: cluster.k8s.io/v1alpha1
     kind: Cluster
     spec:
     ...
       providerSpec:
         value:
         ...
           helmReleases:
           - name: stacklight
             values:
               elasticsearch:
                 persistentVolumeClaimSize: 100Gi
               logging:
                 enabled: true
                 persistentVolumeClaimSize: 100Gi
    
  5. Scale up the opensearch-master StatefulSet with dependent resources to 1 and enable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 1 statefulset opensearch-master
    
    sleep 100
    
    kubectl -n stacklight scale --replicas 1 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 1 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : false }}'
    
[27732-2] Custom settings for ‘elasticsearch.logstashRetentionTime’ are dismissed

Fixed in 11.6.0 and 12.7.0

Custom settings for the deprecated elasticsearch.logstashRetentionTime parameter are overwritten by the default setting of 1 day.

The issue may affect the following Cluster releases with enabled elasticsearch.logstashRetentionTime:

  • 11.2.0 - 11.5.0

  • 7.8.0 - 7.11.0

  • 8.8.0 - 8.10.0, 12.5.0 (MOSK clusters)

  • 10.2.4 - 10.8.1 (attached MKE 3.4.x clusters)

  • 13.0.2 - 13.5.1 (attached MKE 3.5.x clusters)

As a workaround, in the Cluster object, replace elasticsearch.logstashRetentionTime with elasticsearch.retentionTime that was implemented to replace the deprecated parameter. For example:

apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
spec:
  ...
  providerSpec:
    value:
    ...
      helmReleases:
      - name: stacklight
        values:
          elasticsearch:
            retentionTime:
              logstash: 10
              events: 10
              notifications: 10
          logging:
            enabled: true

For the StackLight configuration procedure and parameters description, refer to Configure StackLight.

[20876] StackLight pods get stuck with the ‘NodeAffinity failed’ error

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshoot StackLight.

On a managed cluster, the StackLight pods may get stuck with the Pod predicate NodeAffinity failed error in the pod status. The issue may occur if the StackLight node label was added to one machine and then removed from another one.

The issue does not affect the StackLight services: all required StackLight pods migrate successfully, except for the extra pods that are created and get stuck during pod migration.

As a workaround, remove the stuck pods:

kubectl --kubeconfig <managedClusterKubeconfig> -n stacklight delete pod <stuckPodName>

Ceph
[26820] ‘KaaSCephCluster’ does not reflect issues during Ceph cluster deletion

Fixed in 2.22.0

The status section in the KaaSCephCluster.status CR does not reflect issues during the process of a Ceph cluster deletion.

As a workaround, inspect Ceph Controller logs on the managed cluster:

kubectl --kubeconfig <managedClusterKubeconfig> -n ceph-lcm-mirantis logs <ceph-controller-pod-name>

[26441] Cluster update fails with the MountDevice failed for volume warning

Update of a managed cluster based on bare metal with Ceph enabled fails with the PersistentVolumeClaim getting stuck in the Pending state for the prometheus-server StatefulSet and the MountVolume.MountDevice failed for volume warning in the StackLight event logs.

Workaround:

  1. Verify that the description of the Pods that failed to run contains the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    

    In the command above, replace the following values:

    • <affectedProjectName> is the Container Cloud project name where the Pods failed to run

    • <affectedPodName> is a Pod name that failed to run in the specified project

    In the Pod description, identify the node name where the Pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the Pod that fails to 0 replicas.

  4. On every csi-rbdplugin Pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected Pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.


Management cluster upgrade
[26740] Failure to upgrade a management cluster with a custom certificate

Fixed in 2.21.0

An upgrade of a Container Cloud management cluster with a custom Keycloak or web UI TLS certificate fails with the following example error:

failed to update management cluster: \
admission webhook "validations.kaas.mirantis.com" denied the request: \
failed to validate TLS spec for Cluster 'default/kaas-mgmt': \
desired hostname is not set for 'ui'

Workaround:

Verify that the tls section of the management cluster contains the hostname and certificate fields for configured applications:

  1. Open the management Cluster object for editing:

    kubectl edit cluster <mgmtClusterName>
    
  2. Verify that the tls section contains the following fields:

    tls:
      keycloak:
        certificate:
          name: keycloak
        hostname: <keycloakHostName>
        tlsConfigRef: "" or "keycloak"
      ui:
        certificate:
          name: ui
        hostname: <webUIHostName>
        tlsConfigRef: "" or "ui"
    

Container Cloud web UI
[26416] Failure to upload an MKE client bundle during cluster attachment

Fixed in 7.11.0, 11.5.0 and 12.5.0

During attachment of an existing MKE cluster using the Container Cloud web UI, uploading of an MKE client bundle fails with a false-positive message about a successful upload.

Workaround:

Select from the following options:

  • Fill in the required fields for the MKE client bundle manually.

  • In the Attach Existing MKE Cluster window, use upload MKE client bundle twice to upload ucp.bundle-admin.zip and ucp-docker-bundle.zip located in the first archive.

[23002] Inability to set a custom value for a predefined node label

Fixed in 7.11.0, 11.5.0 and 12.5.0

During machine creation using the Container Cloud web UI, a custom value for a node label cannot be set.

As a workaround, manually add the value to spec.providerSpec.value.nodeLabels in machine.yaml.
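
A sketch of the corresponding machine.yaml fragment (the label key and value are illustrative; the key/value list layout is an assumption):

spec:
  providerSpec:
    value:
      nodeLabels:
      - key: stacklight
        value: enabled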


Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.20.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.33.5

aws-credentials-controller

1.33.5

Azure Updated

azure-provider

1.33.5

azure-credentials-controller

1.33.5

Bare metal

ambassador

1.20.1-alpine

baremetal-operator Updated

6.3.3

baremetal-public-api Updated

6.3.3

baremetal-provider Updated

1.33.5

baremetal-resource-controller Updated

base-focal-20220627134752

ironic Updated

yoga-focal-20220719132049

ironic-operator Removed

n/a

kaas-ipam

base-focal-20220503165133

keepalived

2.1.5

local-volume-provisioner

2.5.0-mcp

mariadb

10.4.17-bionic-20220113085105

IAM Updated

iam

2.4.31

iam-controller

1.33.5

keycloak

18.0.0

Container Cloud

admission-controller Updated

1.33.5

agent-controller Updated

1.33.5

byo-credentials-controller Updated

1.33.5

byo-provider Updated

1.33.5

ceph-kcc-controller Updated

1.33.5

cert-manager Updated

1.33.5

client-certificate-controller Updated

1.33.5

golang

1.17.6

event-controller Updated

1.33.5

kaas-public-api Updated

1.33.5

kaas-exporter Updated

1.33.5

kaas-ui Updated

1.33.6

lcm-controller Updated

0.3.0-285-g8498abe0

license-controller Updated

1.33.5

machinepool-controller Updated

1.33.5

mcc-cache Updated

1.33.5

portforward-controller Updated

1.33.5

proxy-controller Updated

1.33.5

rbac-controller Updated

1.33.5

release-controller Updated

1.33.5

rhellicense-controller Updated

1.33.5

scope-controller Updated

1.33.5

user-controller Updated

1.33.5

Equinix Metal

equinix-provider Updated

1.33.5

equinix-credentials-controller Updated

1.33.5

keepalived

2.1.5

OpenStack Updated

openstack-provider

1.33.5

os-credentials-controller

1.33.5

VMware vSphere

vsphere-provider Updated

1.33.7

vsphere-credentials-controller Updated

1.33.5

keepalived

2.1.5

squid-proxy

0.0.1-6

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.20.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-6.3.3.tgz

baremetal-public-api Updated

https://binary.mirantis.com/bm/helm/baremetal-public-api-6.3.3.tgz

ironic-python-agent.initramfs Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-yoga-focal-debug-20220801150933

ironic-python-agent.kernel Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-yoga-focal-debug-20220801150933

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-6.3.3.tgz

local-volume-provisioner

https://binary.mirantis.com/bm/helm/local-volume-provisioner-2.5.0-mcp.tgz

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-104-6e2e82c.tgz

target ubuntu system

https://binary.mirantis.com/bm/bin/efi/ubuntu/tgz-bionic-20210622161844

Docker images

ambassador

mirantis.azurecr.io/general/external/docker.io/library/nginx:1.20.1-alpine

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-focal-20220611131433

baremetal-resource-controller Updated

mirantis.azurecr.io/bm/baremetal-resource-controller:base-focal-20220627134752

dynamic_ipxe Updated

mirantis.azurecr.io/bm/dynamic-ipxe:base-focal-20220805114906

dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-focal-20220705175454

dnsmasq-controller Updated

mirantis.azurecr.io/bm/dnsmasq-controller:base-focal-20220704102028

ironic Updated

mirantis.azurecr.io/openstack/ironic:yoga-focal-20220719132049

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:yoga-focal-20220719132049

ironic-operator Removed

n/a

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20220602121226

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-focal-20220503165133

mariadb

mirantis.azurecr.io/general/mariadb:10.4.17-bionic-20220113085105

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.14.0-1-g8725814

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-focal-20220128103433


Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.33.5.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.33.5.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.33.5.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.33.5.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.33.5.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.33.5.tgz

azure-credentials-controller

https://binary.mirantis.com/core/helm/azure-credentials-controller-1.33.5.tgz

azure-provider

https://binary.mirantis.com/core/helm/azure-provider-1.33.5.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.33.5.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.33.5.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.33.5.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.33.5.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.33.5.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.33.5.tgz

equinix-credentials-controller

https://binary.mirantis.com/core/helm/equinix-credentials-controller-1.33.5.tgz

equinix-provider

https://binary.mirantis.com/core/helm/equinix-provider-1.33.5.tgz

equinixmetalv2-provider

https://binary.mirantis.com/core/helm/equinixmetalv2-provider-1.33.5.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.33.5.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.33.5.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.33.5.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.33.5.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.33.6.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.33.5.tgz

license-controller Updated

https://binary.mirantis.com/core/helm/license-controller-1.33.5.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.33.5.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.33.5.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.33.5.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.33.5.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.33.5.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.33.5.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.33.5.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.33.5.tgz

scope-controller

http://binary.mirantis.com/core/helm/scope-controller-1.33.5.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.33.5.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.33.5.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.33.7.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.33.5.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.33.5

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.33.5

aws-cluster-api-controller Updated

mirantis.azurecr.io/core/aws-cluster-api-controller:1.33.5

aws-credentials-controller Updated

mirantis.azurecr.io/core/aws-credentials-controller:1.33.5

azure-cluster-api-controller Updated

mirantis.azurecr.io/core/azure-cluster-api-controller:1.33.5

azure-credentials-controller Updated

mirantis.azurecr.io/core/azure-credentials-controller:1.33.5

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.33.5

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.33.5

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.33.5

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.6.1

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.33.5

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.33.5

cluster-api-provider-equinix Updated

mirantis.azurecr.io/core/equinix-cluster-api-controller:1.33.5

equinix-credentials-controller Updated

mirantis.azurecr.io/core/equinix-credentials-controller:1.33.5

frontend Updated

mirantis.azurecr.io/core/frontend:1.33.5

haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.17.0-8-g6ca89d5

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.33.5

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.33.5

lcm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-285-g8498abe0

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.33.5

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.14.0-1-g8725814

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.33.5

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.33.5

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.33.5

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.33.5

registry

mirantis.azurecr.io/lcm/registry:2.7.1

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.33.5

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.33.5

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.33.5

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-6

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.33.5

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.33.5

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.33.5


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

Helm charts Updated

iam

http://binary.mirantis.com/iam/helm/iam-2.4.31.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.13.tgz

keycloak_proxy

http://binary.mirantis.com/core/helm/keycloak_proxy-1.33.5.tgz

Docker images

kubernetes-entrypoint

mirantis.azurecr.io/iam/external/kubernetes-entrypoint:v0.3.1

mariadb

mirantis.azurecr.io/general/mariadb:10.4.16-bionic-20201105025052

keycloak Updated

mirantis.azurecr.io/iam/keycloak:0.5.10

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-2

2.19.0

The Mirantis Container Cloud GA release 2.19.0:

  • Introduces support for the Cluster release 11.3.0 that is based on Mirantis Container Runtime 20.10.11 and Mirantis Kubernetes Engine 3.5.3 with Kubernetes 1.21.

  • Introduces support for the Cluster release 7.9.0 that is based on Mirantis Container Runtime 20.10.11 and Mirantis Kubernetes Engine 3.4.9 with Kubernetes 1.20.

  • Supports the Cluster release 8.8.0 that is based on the Cluster release 7.8.0 and represents Mirantis OpenStack for Kubernetes (MOSK) 22.3.

  • Does not support greenfield deployments on deprecated Cluster releases 11.2.0, 8.6.0, and 7.8.0. Use the latest Cluster releases of the series instead.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.19.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.19.0. For the list of enhancements in the Cluster releases 11.3.0 and 7.9.0 that are introduced by the Container Cloud release 2.19.0, see the Cluster releases (managed).


General availability support for machines upgrade order

Implemented full support for the upgrade sequence of machines that allows prioritized machines to be upgraded first. You can now set the upgrade index on an existing machine or machine pool using the Container Cloud web UI.

Consider the following upgrade index specifics:

  • The first machine to upgrade is always one of the control plane machines with the lowest upgradeIndex. Other control plane machines are upgraded one by one according to their upgrade indexes.

  • If the Cluster spec dedicatedControlPlane field is false, worker machines are upgraded only after the upgrade of all control plane machines finishes. Otherwise, they are upgraded after the first control plane machine, concurrently with other control plane machines.

  • If several machines have the same upgrade index, they have the same priority during upgrade.

  • If the value is not set, the machine is automatically assigned an upgrade index value.
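
For example, the index can be set directly in the Machine object similar to the following (a sketch; the exact location of the upgradeIndex field under providerSpec.value is an assumption):

apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
spec:
  providerSpec:
    value:
      upgradeIndex: 1   # assumed field location; lower indexes are upgraded first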

Web UI support for booting an OpenStack machine from a volume

TechPreview

Implemented the Boot From Volume option for the OpenStack machine creation wizard in the Container Cloud web UI. The feature allows booting OpenStack-based machines from a block storage volume.

The feature is beneficial for clouds that do not have enough space on hypervisors. After you enable this option, Cinder storage is used instead of Nova storage.

Modification of network configuration on machines

TechPreview

Enabled the ability to modify existing network configuration on running bare metal clusters with a mandatory approval of new settings by an Infrastructure Operator. This validation is required to prevent accidental cluster failures due to misconfiguration.

After you make necessary network configuration changes in the required L2 template, you now need to approve the changes by setting the spec.netconfigUpdateAllow:true flag in each affected IpamHost object.
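
For example, the approval flag can be set with a patch similar to the following (substitute the project namespace and IpamHost name with the actual values):

kubectl -n <projectName> patch ipamhost <ipamHostName> --type merge -p '{"spec":{"netconfigUpdateAllow":true}}'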

Caution

For MKE clusters that are part of MOSK infrastructure, the feature support will become available in one of the following Container Cloud releases.

New format of log entries on management clusters

Implemented a new format of log entries for cluster and machine logs of a management cluster. Each log entry now contains a request ID that identifies chronology of actions performed on a cluster or machine. The feature applies to all supported cloud providers.

The new format is <providerType>.<objectName>.req:<requestID>. For example, bm.machine.req:374, bm.cluster.req:172.

  • <providerType> - provider name, possible values: aws, azure, os, bm, vsphere, equinix.

  • <objectName> - name of an object being processed by provider, possible values: cluster, machine.

  • <requestID> - request ID number that increases when a provider receives a request from Kubernetes about creating, updating, deleting an object. The request ID allows combining all operations performed with an object within one request. For example, the result of a machine creation, update of its statuses, and so on.
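
For example, to trace all operations that belong to a single request on a bare metal management cluster, you can filter the provider logs by the request ID (a sketch; the namespace and deployment name are assumptions):

kubectl -n kaas logs deployment/baremetal-provider | grep 'bm.machine.req:374'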

Extended and basic versions of logs

Implemented the --extended flag for collecting the extended version of logs that contains system and MKE logs, logs from LCM Ansible and LCM Agent, as well as cluster events, Kubernetes resources descriptions, and logs. You can use this flag to collect logs on any cluster type.

Distribution selector for bare metal machines in web UI

Added the Distribution field to the bare metal machine creation wizard in the Container Cloud web UI. The default operating system in the distribution list is Ubuntu 20.04.

Caution

Use the outdated Ubuntu 18.04 distribution only on existing clusters based on Ubuntu 18.04, never on greenfield deployments.

Removal of Helm v2 support from Helm Controller

After switching all remaining OpenStack Helm releases from v2 to v3, dropped support for Helm v2 in Helm Controller and removed the Tiller image for all related components.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.19.0 along with the Cluster releases 11.3.0 and 7.9.0:

  • [16379, 23865] Fixed the issue that caused an Equinix-based management or managed cluster update to fail with the FailedAttachVolume and FailedMount warnings.

  • [24286] Fixed the issue wherein creation of a new Equinix-based managed cluster failed due to failure to release a new vRouter ID.

  • [24722] Fixed the issue that caused Ceph clusters to be broken on Equinix-based managed clusters deployed on a Container Cloud instance with a non-default (different from region-one) region configured.

  • [24806] Fixed the issue wherein the dhcp-option=tag parameters were not applied to dnsmasq.conf during the bootstrap of a bare metal management cluster with a multi-rack topology.

  • [17778] Fixed the issue wherein the Container Cloud web UI displayed the new release version while update for some nodes was still in progress.

  • [24676] Fixed the issue wherein the deployment of an Equinix-based management cluster failed with the following error message:

    Failed waiting for OIDC configuration readiness: timed out waiting for the
    condition
    
  • [25050] For security reasons, disabled the deprecated TLS v1.0 and v1.1 for the mcc-cache and kaas-ui Container Cloud services.

  • [25256] Optimized the number of simultaneous connections to etcd to be open during configuration of Calico policies.

  • [24914] Fixed the issue wherein Helm Controller was getting stuck during readiness checks due to the timeout for helmclient not being set.

  • [24317] Fixed a number of security vulnerabilities in the Container Cloud Docker images:

    • Updated the following Docker images to fix CVE-2022-24407 and CVE-2022-0778:

      • admission-controller

      • agent-controller

      • aws-cluster-api-controller

      • aws-credentials-controller

      • azure-cluster-api-controller

      • azure-credentials-controller

      • bootstrap-controller

      • byo-cluster-api-controller

      • byo-credentials-controller

      • ceph-kcc-controller

      • cluster-api-provider-baremetal

      • equinix-cluster-api-controller

      • equinix-credentials-controller

      • event-controller

      • iam-controller

      • imc-sync

      • kaas-exporter

      • kproxy

      • license-controller

      • machinepool-controller

      • openstack-cluster-api-controller

      • os-credentials-controller

      • portforward-controller

      • proxy-controller

      • rbac-controller

      • release-controller

      • rhellicense-controller

      • scope-controller

      • storage-discovery

      • user-controller

      • vsphere-cluster-api-controller

      • vsphere-credentials-controller

    • Updated aws-ebs-csi-driver to fix the following Amazon Linux Security Advisories:

    • Updated keycloak to fix the following security vulnerabilities:

    • Updated busybox, iam/api, iam/helm, and nginx to fix CVE-2022-28391

    • Updated frontend to fix CVE-2022-27404

    • Updated kube-proxy to fix CVE-2022-1292

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.19.0 including the Cluster releases 11.3.0 and 7.9.0.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


MKE
[20651] A cluster deployment or update fails with not ready compose deployments

A managed cluster deployment, attachment, or update to a Cluster release with MKE versions 3.3.13, 3.4.6, 3.5.1, or earlier may fail with the compose pods flapping (ready > terminating > pending) and with the following error message appearing in logs:

'not ready: deployments: kube-system/compose got 0/0 replicas, kube-system/compose-api
 got 0/0 replicas'
 ready: false
 type: Kubernetes

Workaround:

  1. Disable Docker Content Trust (DCT):

    1. Access the MKE web UI as admin.

    2. Navigate to Admin > Admin Settings.

    3. In the left navigation pane, click Docker Content Trust and disable it.

  2. Restart the affected deployments such as calico-kube-controllers, compose, compose-api, coredns, and so on:

    kubectl -n kube-system delete deployment <deploymentName>
    

    Once done, the cluster deployment or update resumes.

  3. Re-enable DCT.



Bare metal
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.

[20736] Region deletion failure after regional deployment failure

If a baremetal-based regional cluster deployment fails before pivoting is done, the corresponding region deletion fails.

Workaround:

Using the command below, manually delete all possible traces of the failed regional cluster deployment, including but not limited to the following objects that contain the kaas.mirantis.com/region label of the affected region:

  • cluster

  • machine

  • baremetalhost

  • baremetalhostprofile

  • l2template

  • subnet

  • ipamhost

  • ipaddr

kubectl delete <objectName> -l kaas.mirantis.com/region=<regionName>

Warning

Do not use the same region name again after the regional cluster deployment failure since some objects that reference the region name may still exist.



StackLight
[27732-1] OpenSearch PVC size custom settings are dismissed during deployment

Fixed in 11.6.0 and 12.7.0

The OpenSearch elasticsearch.persistentVolumeClaimSize custom setting is overwritten by logging.persistentVolumeClaimSize during deployment of a Container Cloud cluster of any type and is set to the default 30Gi.

Note

This issue does not block the OpenSearch cluster operations if the default retention time is set. The default setting is usually enough for the capacity size of this cluster.

The issue may affect the following Cluster releases:

  • 11.2.0 - 11.5.0

  • 7.8.0 - 7.11.0

  • 8.8.0 - 8.10.0, 12.5.0 (MOSK clusters)

  • 10.2.4 - 10.8.1 (attached MKE 3.4.x clusters)

  • 13.0.2 - 13.5.1 (attached MKE 3.5.x clusters)

To verify that the cluster is affected:

Note

In the commands below, substitute parameters enclosed in angle brackets to match the affected cluster values.

kubectl --kubeconfig=<managementClusterKubeconfigPath> \
-n <affectedClusterProjectName> \
get cluster <affectedClusterName> \
-o=jsonpath='{.spec.providerSpec.value.helmReleases[*].values.elasticsearch.persistentVolumeClaimSize}' | xargs echo config size:


kubectl --kubeconfig=<affectedClusterKubeconfigPath> \
-n stacklight get pvc -l 'app=opensearch-master' \
-o=jsonpath="{.items[*].status.capacity.storage}" | xargs echo capacity sizes:

  • The cluster is not affected if the configuration size value matches or is less than any capacity size. For example:

    config size: 30Gi
    capacity sizes: 30Gi 30Gi 30Gi
    
    config size: 50Gi
    capacity sizes: 100Gi 100Gi 100Gi
    
  • The cluster is affected if the configuration size is larger than any capacity size. For example:

    config size: 200Gi
    capacity sizes: 100Gi 100Gi 100Gi
    

Workaround for a new cluster creation:

  1. Select from the following options:

    • For a management or regional cluster, during the bootstrap procedure, open cluster.yaml.template for editing.

    • For a managed cluster, open the Cluster object for editing.

      Caution

      For a managed cluster, use the Container Cloud API instead of the web UI for cluster creation.

  2. In the opened .yaml file, add logging.persistentVolumeClaimSize along with elasticsearch.persistentVolumeClaimSize. For example:

    apiVersion: cluster.k8s.io/v1alpha1
    spec:
    ...
      providerSpec:
        value:
        ...
          helmReleases:
          - name: stacklight
            values:
              elasticsearch:
                persistentVolumeClaimSize: 100Gi
              logging:
                enabled: true
                persistentVolumeClaimSize: 100Gi
    
  3. Continue the cluster deployment. The system will use the custom value set in logging.persistentVolumeClaimSize.

    Caution

    If elasticsearch.persistentVolumeClaimSize is absent in the .yaml file, the Admission Controller blocks the configuration update.

Workaround for an existing cluster:

Caution

During the application of the below workarounds, a short outage of OpenSearch and its dependent components may occur with the following alerts firing on the cluster. This behavior is expected. Therefore, disregard these alerts.

StackLight alerts list firing during cluster update

Cluster size and outage probability level

Alert name

Label name and component

Any cluster with high probability

KubeStatefulSetOutage

statefulset=opensearch-master

KubeDeploymentOutage

  • deployment=opensearch-dashboards

  • deployment=metricbeat

Large cluster with average probability

KubePodsNotReady Removed in 17.0.0, 16.0.0, and 14.1.0

  • created_by_name="opensearch-master*"

  • created_by_name="opensearch-dashboards*"

  • created_by_name="metricbeat-*"

OpenSearchClusterStatusWarning

n/a

OpenSearchNumberOfPendingTasks

n/a

OpenSearchNumberOfInitializingShards

n/a

OpenSearchNumberOfUnassignedShards Removed in 2.27.0 (17.2.0 and 16.2.0)

n/a

Any cluster with low probability

KubeStatefulSetReplicasMismatch

statefulset=opensearch-master

KubeDeploymentReplicasMismatch

  • deployment=opensearch-dashboards

  • deployment=metricbeat

StackLight in HA mode with LVP provisioner for OpenSearch PVCs

Warning

After applying this workaround, the existing log data will be lost. Therefore, if required, migrate log data to a new persistent volume (PV).

  1. Move the existing log data to a new PV, if required.

  2. Increase the disk size for local volume provisioner (LVP).

  3. Scale down the opensearch-master StatefulSet with dependent resources to 0 and disable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 0 statefulset opensearch-master
    
    kubectl -n stacklight scale --replicas 0 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 0 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : true }}'
    
  4. Recreate the opensearch-master StatefulSet with the updated disk size.

    kubectl get statefulset opensearch-master -o yaml -n stacklight | sed 's/storage: 30Gi/storage: <pvcSize>/g' > opensearch-master.yaml
    
    kubectl -n stacklight delete statefulset opensearch-master
    
    kubectl create -f opensearch-master.yaml
    

    Replace <pvcSize> with the elasticsearch.persistentVolumeClaimSize value.

  5. Delete existing PVCs:

    kubectl delete pvc -l 'app=opensearch-master' -n stacklight
    

    Warning

    This command removes all existing logs data from PVCs.

  6. In the Cluster configuration, set the same logging.persistentVolumeClaimSize as the size of elasticsearch.persistentVolumeClaimSize. For example:

    apiVersion: cluster.k8s.io/v1alpha1
    kind: Cluster
    spec:
    ...
      providerSpec:
        value:
        ...
          helmReleases:
          - name: stacklight
            values:
              elasticsearch:
                persistentVolumeClaimSize: 100Gi
              logging:
                enabled: true
                persistentVolumeClaimSize: 100Gi
    
  7. Scale up the opensearch-master StatefulSet with dependent resources and enable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 3 statefulset opensearch-master
    
    sleep 100
    
    kubectl -n stacklight scale --replicas 1 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 1 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : false }}'
    
StackLight in non-HA mode with an expandable StorageClass for OpenSearch PVCs

Note

To verify whether a StorageClass is expandable:

kubectl -n stacklight get pvc | grep opensearch-master | awk '{print $6}' | xargs -I{} kubectl get storageclass {} -o yaml | grep 'allowVolumeExpansion: true'

A positive system response is allowVolumeExpansion: true. A negative system response is blank or false.

  1. Scale down the opensearch-master StatefulSet with dependent resources to 0 and disable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 0 statefulset opensearch-master
    
    kubectl -n stacklight scale --replicas 0 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 0 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : true }}'
    
  2. Recreate the opensearch-master StatefulSet with the updated disk size.

    kubectl -n stacklight get statefulset opensearch-master -o yaml | sed 's/storage: 30Gi/storage: <pvc_size>/g' > opensearch-master.yaml
    
    kubectl -n stacklight delete statefulset opensearch-master
    
    kubectl create -f opensearch-master.yaml
    

    Replace <pvc_size> with the elasticsearch.persistentVolumeClaimSize value.

  3. Patch the PVCs with the new elasticsearch.persistentVolumeClaimSize value:

    kubectl -n stacklight patch pvc opensearch-master-opensearch-master-0 -p  '{ "spec": { "resources": { "requests": { "storage": "<pvc_size>" }}}}'
    

    Replace <pvc_size> with the elasticsearch.persistentVolumeClaimSize value.

  4. In the Cluster configuration, set logging.persistentVolumeClaimSize the same as the size of elasticsearch.persistentVolumeClaimSize. For example:

     apiVersion: cluster.k8s.io/v1alpha1
     kind: Cluster
     spec:
     ...
       providerSpec:
         value:
         ...
           helmReleases:
           - name: stacklight
             values:
               elasticsearch:
                 persistentVolumeClaimSize: 100Gi
               logging:
                 enabled: true
                 persistentVolumeClaimSize: 100Gi
    
  5. Scale up the opensearch-master StatefulSet with dependent resources to 1 and enable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 1 statefulset opensearch-master
    
    sleep 100
    
    kubectl -n stacklight scale --replicas 1 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 1 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : false }}'
    
StackLight in non-HA mode with a non-expandable StorageClass and no LVP for OpenSearch PVCs

Warning

After applying this workaround, the existing log data will be lost. Depending on your custom provisioner, you may be able to use a third-party tool, such as pv-migrate, to copy all data from one PV to another.

If data loss is acceptable, proceed with the workaround below.

Note

To verify whether a StorageClass is expandable:

kubectl -n stacklight get pvc | grep opensearch-master | awk '{print $6}' | xargs -I{} kubectl get storageclass {} -o yaml | grep 'allowVolumeExpansion: true'

A positive system response is allowVolumeExpansion: true. A negative system response is blank or false.

  1. Scale down the opensearch-master StatefulSet with dependent resources to 0 and disable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 0 statefulset opensearch-master
    
    kubectl -n stacklight scale --replicas 0 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 0 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : true }}'
    
  2. Recreate the opensearch-master StatefulSet with the updated disk size:

    kubectl get statefulset opensearch-master -o yaml -n stacklight | sed 's/storage: 30Gi/storage: <pvc_size>/g' > opensearch-master.yaml
    
    kubectl -n stacklight delete statefulset opensearch-master
    
    kubectl create -f opensearch-master.yaml
    

    Replace <pvc_size> with the elasticsearch.persistentVolumeClaimSize value.

  3. Delete existing PVCs:

    kubectl delete pvc -l 'app=opensearch-master' -n stacklight
    

    Warning

    This command removes all existing logs data from PVCs.

  4. In the Cluster configuration, set logging.persistentVolumeClaimSize to the same value as the size of the elasticsearch.persistentVolumeClaimSize parameter. For example:

     apiVersion: cluster.k8s.io/v1alpha1
     kind: Cluster
     spec:
     ...
       providerSpec:
         value:
         ...
           helmReleases:
           - name: stacklight
             values:
               elasticsearch:
                 persistentVolumeClaimSize: 100Gi
               logging:
                 enabled: true
                 persistentVolumeClaimSize: 100Gi
    
  5. Scale up the opensearch-master StatefulSet with dependent resources to 1 and enable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 1 statefulset opensearch-master
    
    sleep 100
    
    kubectl -n stacklight scale --replicas 1 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 1 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : false }}'
    
[27732-2] Custom settings for ‘elasticsearch.logstashRetentionTime’ are dismissed

Fixed in 11.6.0 and 12.7.0

Custom settings for the deprecated elasticsearch.logstashRetentionTime parameter are overwritten by the default setting of 1 day.

The issue may affect the following Cluster releases with enabled elasticsearch.logstashRetentionTime:

  • 11.2.0 - 11.5.0

  • 7.8.0 - 7.11.0

  • 8.8.0 - 8.10.0, 12.5.0 (MOSK clusters)

  • 10.2.4 - 10.8.1 (attached MKE 3.4.x clusters)

  • 13.0.2 - 13.5.1 (attached MKE 3.5.x clusters)

As a workaround, in the Cluster object, replace elasticsearch.logstashRetentionTime with elasticsearch.retentionTime that was implemented to replace the deprecated parameter. For example:

apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
spec:
  ...
  providerSpec:
    value:
    ...
      helmReleases:
      - name: stacklight
        values:
          elasticsearch:
            retentionTime:
              logstash: 10
              events: 10
              notifications: 10
          logging:
            enabled: true

For the StackLight configuration procedure and parameters description, refer to Configure StackLight.

[20876] StackLight pods get stuck with the ‘NodeAffinity failed’ error

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshoot StackLight.

On a managed cluster, the StackLight pods may get stuck with the Pod predicate NodeAffinity failed error in the pod status. The issue may occur if the StackLight node label was added to one machine and then removed from another one.

The issue does not affect the StackLight services: all required StackLight pods migrate successfully, except for the extra pods that are created during pod migration and get stuck.

As a workaround, remove the stuck pods:

kubectl --kubeconfig <managedClusterKubeconfig> -n stacklight delete pod <stuckPodName>
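To list the candidate stuck pods before deleting them, you can use the following sketch, which assumes that such pods end up in the Failed phase:

kubectl --kubeconfig <managedClusterKubeconfig> -n stacklight get pods --field-selector=status.phase=Failed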

Container Cloud web UI
[26416] Failure to upload an MKE client bundle during cluster attachment

Fixed in 7.11.0, 11.5.0 and 12.5.0

During attachment of an existing MKE cluster using the Container Cloud web UI, uploading of an MKE client bundle fails with a false-positive message about a successful uploading.

Workaround:

Select from the following options:

  • Fill in the required fields for the MKE client bundle manually.

  • In the Attach Existing MKE Cluster window, use the upload MKE client bundle option twice: first to upload ucp.bundle-admin.zip, then to upload ucp-docker-bundle.zip located in the first archive.

[23002] Inability to set a custom value for a predefined node label

Fixed in 7.11.0, 11.5.0 and 12.5.0

During machine creation using the Container Cloud web UI, a custom value for a node label cannot be set.

As a workaround, manually add the value to spec.providerSpec.value.nodeLabels in machine.yaml.
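A minimal sketch of such an edit follows. The nodeLabels path is taken from this workaround, while the list layout and the stacklight/enabled key-value pair are placeholders for illustration only:

apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
spec:
  providerSpec:
    value:
      nodeLabels:
      # Placeholder label; substitute the predefined key and the custom value you need
      - key: stacklight
        value: enabled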


Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.19.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.32.4

aws-credentials-controller

1.32.4

Azure Updated

azure-provider

1.32.4

azure-credentials-controller

1.32.4

Bare metal

ambassador

1.20.1-alpine

baremetal-operator Updated

6.2.8

baremetal-public-api Updated

6.2.8

baremetal-provider Updated

1.32.4

baremetal-resource-controller Updated

base-focal-20220530195224

ironic Updated

xena-focal-20220603085546

ironic-operator Updated

base-focal-20220605090941

kaas-ipam Updated

base-focal-20220503165133

keepalived

2.1.5

local-volume-provisioner

2.5.0-mcp

mariadb

10.4.17-bionic-20220113085105

IAM

iam Updated

2.4.29

iam-controller Updated

1.32.4

keycloak

16.1.1

Container Cloud

admission-controller Updated

1.32.10

agent-controller Updated

1.32.4

byo-credentials-controller Updated

1.32.4

byo-provider Updated

1.32.4

ceph-kcc-controller Updated

1.32.8

cert-manager Updated

1.32.4

client-certificate-controller Updated

1.32.4

event-controller Updated

1.32.4

golang

1.17.6

kaas-public-api Updated

1.32.4

kaas-exporter Updated

1.32.4

kaas-ui Updated

1.32.10

lcm-controller Updated

0.3.0-257-ga93244da

license-controller Updated

1.32.4

machinepool-controller Updated

1.32.4

mcc-cache Updated

1.32.4

portforward-controller Updated

1.32.4

proxy-controller Updated

1.32.4

rbac-controller Updated

1.32.4

release-controller Updated

1.32.4

rhellicense-controller Updated

1.32.4

scope-controller Updated

1.32.4

user-controller Updated

1.32.4

Equinix Metal

equinix-provider Updated

1.32.4

equinix-credentials-controller Updated

1.32.4

keepalived

2.1.5

OpenStack Updated

openstack-provider

1.32.4

os-credentials-controller

1.32.4

VMware vSphere

vsphere-provider Updated

1.32.4

vsphere-credentials-controller Updated

1.32.4

keepalived

2.1.5

squid-proxy

0.0.1-6

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.19.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-6.2.8.tgz

baremetal-public-api Updated

https://binary.mirantis.com/bm/helm/baremetal-public-api-6.2.8.tgz

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-xena-focal-debug-20220512084815

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-xena-focal-debug-20220512084815

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-6.2.8.tgz

local-volume-provisioner

https://binary.mirantis.com/bm/helm/local-volume-provisioner-2.5.0-mcp.tgz

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-104-6e2e82c.tgz

target ubuntu system

https://binary.mirantis.com/bm/bin/efi/ubuntu/tgz-bionic-20210622161844

Docker images

ambassador

mirantis.azurecr.io/general/external/docker.io/library/nginx:1.20.1-alpine

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-focal-20220611131433

baremetal-resource-controller Updated

mirantis.azurecr.io/bm/baremetal-resource-controller:base-focal-20220530195224

dynamic_ipxe

mirantis.azurecr.io/bm/dynamic-ipxe:base-focal-20220429170829

dnsmasq Updated

mirantis.azurecr.io/bm/baremetal-dnsmasq:base-focal-20220518104155

dnsmasq-controller Updated

mirantis.azurecr.io/bm/dnsmasq-controller:base-focal-20220620190158

ironic Updated

mirantis.azurecr.io/openstack/ironic:xena-focal-20220603085546

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:xena-focal-20220603085546

ironic-operator Updated

mirantis.azurecr.io/bm/ironic-operator:base-focal-20220605090941

ironic-prometheus-exporter Updated

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20220602121226

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-focal-20220503165133

mariadb

mirantis.azurecr.io/general/mariadb:10.4.17-bionic-20220113085105

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.14.0-1-g8725814

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-focal-20220128103433


Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.32.4.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.32.4.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.32.10.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.32.4.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.32.4.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.32.4.tgz

azure-credentials-controller

https://binary.mirantis.com/core/helm/azure-credentials-controller-1.32.4.tgz

azure-provider

https://binary.mirantis.com/core/helm/azure-provider-1.32.4.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.32.4.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.32.4.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.32.4.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.32.4.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.32.4.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.32.4.tgz

equinix-credentials-controller

https://binary.mirantis.com/core/helm/equinix-credentials-controller-1.32.4.tgz

equinix-provider

https://binary.mirantis.com/core/helm/equinix-provider-1.32.4.tgz

equinixmetalv2-provider

https://binary.mirantis.com/core/helm/equinixmetalv2-provider-1.32.4.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.32.4.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.32.4.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.32.4.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.32.4.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.32.10.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.32.4.tgz

license-controller Updated

https://binary.mirantis.com/core/helm/license-controller-1.32.4.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.32.4.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.32.4.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.32.4.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.32.4.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.32.4.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.32.4.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.32.4.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.32.4.tgz

scope-controller

http://binary.mirantis.com/core/helm/scope-controller-1.32.4.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.32.4.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.32.4.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.32.4.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.32.4.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.32.10

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.32.4

aws-cluster-api-controller Updated

mirantis.azurecr.io/core/aws-cluster-api-controller:1.32.4

aws-credentials-controller Updated

mirantis.azurecr.io/core/aws-credentials-controller:1.32.4

azure-cluster-api-controller Updated

mirantis.azurecr.io/core/azure-cluster-api-controller:1.32.4

azure-credentials-controller Updated

mirantis.azurecr.io/core/azure-credentials-controller:1.32.4

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.32.4

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.32.4

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:1.32.8

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.6.1

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.32.4

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.32.4

cluster-api-provider-equinix Updated

mirantis.azurecr.io/core/equinix-cluster-api-controller:1.32.4

equinix-credentials-controller Updated

mirantis.azurecr.io/core/equinix-credentials-controller:1.32.4

frontend Updated

mirantis.azurecr.io/core/frontend:1.32.10

haproxy Updated

mirantis.azurecr.io/lcm/mcc-haproxy:v0.17.0-8-g6ca89d5

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.32.4

kproxy Updated

mirantis.azurecr.io/core/kproxy:1.32.4

lcm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-257-ga93244da

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.32.4

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.14.0-1-g8725814

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.32.4

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.32.4

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.32.4

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.32.4

registry

mirantis.azurecr.io/lcm/registry:2.7.1

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.32.4

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.32.4

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.32.4

squid-proxy

mirantis.azurecr.io/lcm/squid-proxy:0.0.1-6

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-cluster-api-controller:1.32.4

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.32.4

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.32.4


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

Helm charts

iam Updated

http://binary.mirantis.com/iam/helm/iam-2.4.29.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.12.tgz

keycloak_proxy Updated

http://binary.mirantis.com/core/helm/keycloak_proxy-1.32.4.tgz

Docker images

kubernetes-entrypoint

mirantis.azurecr.io/iam/external/kubernetes-entrypoint:v0.3.1

mariadb

mirantis.azurecr.io/general/mariadb:10.4.16-bionic-20201105025052

keycloak Updated

mirantis.azurecr.io/iam/keycloak:0.5.8

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-2

2.18.1

The Mirantis Container Cloud GA release 2.18.1 is based on 2.18.0 and:

  • Introduces support for the Cluster release 8.8.0 that is based on the Cluster release 7.8.0 and represents Mirantis OpenStack for Kubernetes (MOSK) 22.3. This Cluster release is based on the updated version of Mirantis Kubernetes Engine 3.4.8 with Kubernetes 1.20 and Mirantis Container Runtime 20.10.11.

  • Supports the latest Cluster releases 7.8.0 and 11.2.0.

  • Does not support new deployments based on the deprecated Cluster releases 11.1.0, 8.6.0, and 7.7.0.

For details about the Container Cloud release 2.18.1, refer to its parent release 2.18.0:

Caution

Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

2.18.0

The Mirantis Container Cloud GA release 2.18.0:

  • Introduces support for the Cluster release 11.2.0 that is based on Mirantis Container Runtime 20.10.8 and Mirantis Kubernetes Engine 3.5.1 with Kubernetes 1.21.

  • Introduces support for the Cluster release 7.8.0 that is based on Mirantis Container Runtime 20.10.8 and Mirantis Kubernetes Engine 3.4.7 with Kubernetes 1.20.

  • Supports the Cluster release 8.6.0 that is based on the Cluster release 7.6.0 and represents Mirantis OpenStack for Kubernetes (MOSK) 22.2.

  • Does not support greenfield deployments on deprecated Cluster releases 11.1.0, 8.5.0, and 7.7.0. Use the latest Cluster releases of the series instead.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.18.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.18.0. For the list of enhancements in the Cluster releases 11.2.0 and 7.8.0 that are introduced by the Container Cloud release 2.18.0, see the Cluster releases (managed).


Ubuntu kernel update for bare metal clusters

Updated the Ubuntu kernel version to 5.4.0-109-generic for bare metal non-MOSK-based management, regional, and managed clusters to apply Ubuntu 18.04 or 20.04 security and system updates.

Caution

During a baremetal-based cluster update to Container Cloud 2.18 and to the latest Cluster releases 11.2.0 and 7.8.0, hosts will be restarted to apply the latest supported Ubuntu 18.04 or 20.04 packages. Therefore:

  • Depending on the cluster configuration, applying security updates and host restart can increase the update time for each node to up to 1 hour.

  • Cluster nodes are updated one by one. Therefore, for large clusters, the update may take several days to complete.

Support for Ubuntu 20.04 on greenfield vSphere deployments

Implemented full support for Ubuntu 20.04 LTS (Focal Fossa) as the default host operating system that now installs on management, regional, and managed clusters for the vSphere cloud provider.

Caution

Upgrading from Ubuntu 18.04 to 20.04 on existing deployments is not supported.

Booting a machine from a block storage volume for OpenStack provider

TechPreview

Implemented initial Technology Preview support for booting OpenStack-based machines from a block storage volume. The feature is beneficial for clouds that do not have enough space on hypervisors. After enabling this option, the Cinder storage is used instead of the Nova storage.

Using the Container Cloud API, you can boot the Bastion node, or the required management, regional, or managed cluster nodes from a volume.
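A minimal sketch of such a Machine configuration is shown below. The bootFromVolume section with the enabled and volumeSize fields is an assumption for illustration only; verify the exact schema against the Container Cloud API documentation before use:

apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
spec:
  providerSpec:
    value:
      # Hypothetical field names for illustration only
      bootFromVolume:
        enabled: true
        volumeSize: 80   # boot volume size in GiB, backed by Cinder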

Note

The ability to enable the boot from volume option using the Container Cloud web UI for managed clusters will be implemented in one of the following Container Cloud releases.

IPSec encryption for the Kubernetes workloads network

TechPreview Experimental since 2.19.0

Implemented initial Technology Preview support for enabling IPSec encryption for the Kubernetes workloads network. The feature allows for secure communication between servers.

You can enable encryption for the Kubernetes workloads network on greenfield deployments during initial creation of a management, regional, and managed cluster through the Cluster object using the secureOverlay parameter.
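A minimal sketch of enabling the parameter in the Cluster object is shown below; the placement of secureOverlay directly under providerSpec.value is an assumption, so verify it against the Container Cloud API documentation:

apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
spec:
  providerSpec:
    value:
      # Assumed placement; enables IPSec encryption for the Kubernetes workloads network
      secureOverlay: true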

Caution

  • For the Azure cloud provider, the feature is not supported. For details, see MKE documentation: Kubernetes network encryption.

  • For the bare metal cloud provider and MOSK-based deployments, the feature support will become available in one of the following Container Cloud releases.

  • For existing deployments, the feature support will become available in one of the following Container Cloud releases.

Support for MITM proxy

TechPreview

Implemented the initial Technology Preview support for man-in-the-middle (MITM) proxies on offline OpenStack and non-MOSK-based bare metal deployments. Using trusted proxy CA certificates, the feature allows monitoring all cluster traffic for security and audit purposes.

Support for custom Docker registries

Implemented support for custom Docker registries configuration in the Container Cloud management, regional, and managed clusters. Using the ContainerRegistry custom resource, you can configure CA certificates on machines to access private Docker registries.
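A minimal sketch of a ContainerRegistry object is shown below. The apiVersion, domain, and CACert fields are assumptions for illustration only; verify the exact resource schema against the Container Cloud API documentation:

apiVersion: kaas.mirantis.com/v1alpha1   # assumed API group
kind: ContainerRegistry
metadata:
  name: private-registry
  namespace: default
spec:
  # Hypothetical fields for illustration only
  domain: registry.example.com:5000
  CACert: <base64-encoded CA certificate>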

Note

For MOSK-based deployments, the feature support is available since Container Cloud 2.18.1.

Upgrade sequence for machines

TechPreview

Implemented initial Technology Preview support for the machine upgrade index that allows upgrading prioritized machines first. During a machine or a machine pool creation, you can use the Container Cloud web UI Upgrade Index option to set a positive numeric value that defines the order of machine upgrade during cluster update.

To set the upgrade order on an existing cluster, use the Container Cloud API:

  • For a machine that is not assigned to a machine pool, add the upgradeIndex field with the required value to the spec:providerSpec:value section in the Machine object.

  • For a machine pool, add the upgradeIndex field with the required value to the spec:machineSpec:providerSpec:value section of the MachinePool object to apply the upgrade order to all machines in the pool.
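For example, a minimal sketch of a standalone Machine object with the upgrade index set (the index value is arbitrary):

apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
spec:
  providerSpec:
    value:
      # Machines with lower upgradeIndex values are upgraded earlier
      upgradeIndex: 1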

Note

  • The first machine to upgrade is always one of the control plane machines with the lowest upgradeIndex. Other control plane machines are upgraded one by one according to their upgrade indexes. If the Cluster spec dedicatedControlPlane field is false, worker machines are upgraded only after the upgrade of all control plane machines finishes. Otherwise, they are upgraded after the first control plane machine, concurrently with other control plane machines.

  • If two or more machines have the same value of upgradeIndex, these machines are equally prioritized during upgrade.

  • Changing of the machine upgrade index during an already running cluster update or maintenance is not supported.

Enablement of Salesforce propagation to all clusters using web UI

Simplified the ability to enable automatic update and sync of the Salesforce configuration on all your clusters by adding the corresponding check box to the Salesforce settings in the Container Cloud web UI.

Documentation enhancements

On top of continuous improvements delivered to the existing Container Cloud guides, added the following documentation:

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.18.0 along with the Cluster releases 11.2.0 and 7.8.0:

  • [24075] Fixed the issue with the Ubuntu 20.04 option not displaying in the operating systems drop-down list during machine creation for the AWS and Equinix Metal (with public networking) providers.

    Warning

    After Container Cloud is upgraded to 2.18.0, remove the values added during the workaround application from the Cluster object.

  • [9339] Fixed the issue with incorrect health monitoring for Kubernetes and MKE endpoints on OpenStack-based clusters.

  • [21710] Fixed the issue with a too high threshold being set for the KubeContainersCPUThrottlingHigh StackLight alert.

  • [22872] Removed the inefficient ElasticNoNewDataCluster and ElasticNoNewDataNode StackLight alerts.

  • [23853] Fixed the issue wherein the KaaSCephOperationRequest resource created to remove the failed node from the Ceph cluster was stuck with the Failed status and an error message in errorReason. The Failed status blocked the replacement of the failed master node on regional clusters of the bare metal and Equinix Metal providers.

  • [23841] Improved error logging for load balancers deletion:

    • The reason for the inability to delete an LB is now displayed in the provider logs.

    • If the search for a floating IP (FIP) associated with the LB being deleted returns more than one FIP, the provider returns an error instead of deleting all found FIPs.

  • [18331] Fixed the issue with the Keycloak admin console menu disappearing on the Add identity provider page during configuration of an identity provider SAML.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.18.0 including the Cluster releases 11.2.0 and 7.8.0.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


MKE
[20651] A cluster deployment or update fails with not ready compose deployments

A managed cluster deployment, attachment, or update to a Cluster release with MKE versions 3.3.13, 3.4.6, 3.5.1, or earlier may fail with the compose pods flapping (ready > terminating > pending) and with the following error message appearing in logs:

'not ready: deployments: kube-system/compose got 0/0 replicas, kube-system/compose-api
 got 0/0 replicas'
 ready: false
 type: Kubernetes

Workaround:

  1. Disable Docker Content Trust (DCT):

    1. Access the MKE web UI as admin.

    2. Navigate to Admin > Admin Settings.

    3. In the left navigation pane, click Docker Content Trust and disable it.

  2. Restart the affected deployments such as calico-kube-controllers, compose, compose-api, coredns, and so on:

    kubectl -n kube-system delete deployment <deploymentName>
    

    Once done, the cluster deployment or update resumes.

  3. Re-enable DCT.



Bare metal
[24806] The dnsmasq parameters are not applied on multi-rack clusters

Fixed in 2.19.0

During bootstrap of a bare metal management cluster with a multi-rack topology, the dhcp-option=tag parameters are not applied to dnsmasq.conf.

Symptoms:

The dnsmasq-controller container logs contain the following example error messages:

KUBECONFIG=kaas-mgmt-kubeconfig kubectl -n kaas logs --tail 50 deployment/dnsmasq -c dnsmasq-controller

...
I0622 09:05:26.898898       8 handler.go:19] Failed to watch Object, kind:'dnsmasq': failed to list *unstructured.Unstructured: the server could not find the requested resource
E0622 09:05:26.899108       8 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.8/tools/cache/reflector.go:167: Failed to watch *unstructured.Unstructured: failed to list *unstructured.Unstructured: the server could not find the requested resource
...

Workaround:

Manually update deployment/dnsmasq with the updated image:

KUBECONFIG=kaas-mgmt-kubeconfig kubectl -n kaas set image deployment/dnsmasq dnsmasq-controller=mirantis.azurecr.io/bm/dnsmasq-controller:base-focal-2-18-issue24806-20220618085127
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.
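A sketch of these commands, assuming <nodeName> is the manager node that runs the ironic Pod; depending on the workloads running on the node, drain may require additional flags such as --ignore-daemonsets:

kubectl cordon <nodeName>
kubectl drain <nodeName> --ignore-daemonsets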

[20736] Region deletion failure after regional deployment failure

If a baremetal-based regional cluster deployment fails before pivoting is done, the corresponding region deletion fails.

Workaround:

Using the command below, manually delete all possible traces of the failed regional cluster deployment, including but not limited to the following objects that contain the kaas.mirantis.com/region label of the affected region:

  • cluster

  • machine

  • baremetalhost

  • baremetalhostprofile

  • l2template

  • subnet

  • ipamhost

  • ipaddr

kubectl delete <objectName> -l kaas.mirantis.com/region=<regionName>
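A sketch of iterating over the listed object kinds in a single shell loop; it assumes that the current kubeconfig context points to the namespace (project) that contains the affected objects, so repeat it per namespace if needed:

for kind in cluster machine baremetalhost baremetalhostprofile l2template subnet ipamhost ipaddr; do
  # Delete every object of the given kind that carries the affected region label
  kubectl delete ${kind} -l kaas.mirantis.com/region=<regionName>
done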

Warning

Do not use the same region name again after the regional cluster deployment failure since some objects that reference the region name may still exist.



Equinix Metal
[16379,23865] Cluster update fails with the FailedMount warning

Fixed in 2.19.0

An Equinix-based management or managed cluster fails to update with the FailedAttachVolume and FailedMount warnings.

Workaround:

  1. Verify that the description of the pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    
    • <affectedProjectName> is the Container Cloud project name where the pods failed to run

    • <affectedPodName> is a pod name that failed to run in this project

    In the pod description, identify the node name where the pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down to 0 replicas the affected StatefulSet or Deployment of the pod that fails to initialize.

  4. On every csi-rbdplugin pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state is Running.



StackLight
[27732-1] OpenSearch PVC size custom settings are dismissed during deployment

Fixed in 11.6.0 and 12.7.0

The OpenSearch elasticsearch.persistentVolumeClaimSize custom setting is overwritten by logging.persistentVolumeClaimSize during deployment of a Container Cloud cluster of any type and is set to the default 30Gi.

Note

This issue does not block the OpenSearch cluster operations if the default retention time is set, because the default retention usually fits within the default storage capacity of this cluster.

The issue may affect the following Cluster releases:

  • 11.2.0 - 11.5.0

  • 7.8.0 - 7.11.0

  • 8.8.0 - 8.10.0, 12.5.0 (MOSK clusters)

  • 10.2.4 - 10.8.1 (attached MKE 3.4.x clusters)

  • 13.0.2 - 13.5.1 (attached MKE 3.5.x clusters)

To verify that the cluster is affected:

Note

In the commands below, substitute parameters enclosed in angle brackets to match the affected cluster values.

kubectl --kubeconfig=<managementClusterKubeconfigPath> \
-n <affectedClusterProjectName> \
get cluster <affectedClusterName> \
-o=jsonpath='{.spec.providerSpec.value.helmReleases[*].values.elasticsearch.persistentVolumeClaimSize}' | xargs echo config size:


kubectl --kubeconfig=<affectedClusterKubeconfigPath> \
-n stacklight get pvc -l 'app=opensearch-master' \
-o=jsonpath="{.items[*].status.capacity.storage}" | xargs echo capacity sizes:

  • The cluster is not affected if the configuration size value matches or is less than any capacity size. For example:

    config size: 30Gi
    capacity sizes: 30Gi 30Gi 30Gi
    
    config size: 50Gi
    capacity sizes: 100Gi 100Gi 100Gi
    
  • The cluster is affected if the configuration size is larger than any capacity size. For example:

    config size: 200Gi
    capacity sizes: 100Gi 100Gi 100Gi
    

Workaround for a new cluster creation:

  1. Select from the following options:

    • For a management or regional cluster, during the bootstrap procedure, open cluster.yaml.template for editing.

    • For a managed cluster, open the Cluster object for editing.

      Caution

      For a managed cluster, use the Container Cloud API instead of the web UI for cluster creation.

  2. In the opened .yaml file, add logging.persistentVolumeClaimSize along with elasticsearch.persistentVolumeClaimSize. For example:

    apiVersion: cluster.k8s.io/v1alpha1
    spec:
    ...
      providerSpec:
        value:
        ...
          helmReleases:
          - name: stacklight
            values:
              elasticsearch:
                persistentVolumeClaimSize: 100Gi
              logging:
                enabled: true
                persistentVolumeClaimSize: 100Gi
    
  3. Continue the cluster deployment. The system will use the custom value set in logging.persistentVolumeClaimSize.

    Caution

    If elasticsearch.persistentVolumeClaimSize is absent in the .yaml file, the Admission Controller blocks the configuration update.

Workaround for an existing cluster:

Caution

During the application of the below workarounds, a short outage of OpenSearch and its dependent components may occur with the following alerts firing on the cluster. This behavior is expected. Therefore, disregard these alerts.

StackLight alerts list firing during cluster update

Cluster size and outage probability level

Alert name

Label name and component

Any cluster with high probability

KubeStatefulSetOutage

statefulset=opensearch-master

KubeDeploymentOutage

  • deployment=opensearch-dashboards

  • deployment=metricbeat

Large cluster with average probability

KubePodsNotReady Removed in 17.0.0, 16.0.0, and 14.1.0

  • created_by_name="opensearch-master*"

  • created_by_name="opensearch-dashboards*"

  • created_by_name="metricbeat-*"

OpenSearchClusterStatusWarning

n/a

OpenSearchNumberOfPendingTasks

n/a

OpenSearchNumberOfInitializingShards

n/a

OpenSearchNumberOfUnassignedShards Removed in 2.27.0 (17.2.0 and 16.2.0)

n/a

Any cluster with low probability

KubeStatefulSetReplicasMismatch

statefulset=opensearch-master

KubeDeploymentReplicasMismatch

  • deployment=opensearch-dashboards

  • deployment=metricbeat

StackLight in HA mode with LVP provisioner for OpenSearch PVCs

Warning

After applying this workaround, the existing log data will be lost. Therefore, if required, migrate log data to a new persistent volume (PV).

  1. Move the existing log data to a new PV, if required.

  2. Increase the disk size for local volume provisioner (LVP).

  3. Scale down the opensearch-master StatefulSet with dependent resources to 0 and disable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 0 statefulset opensearch-master
    
    kubectl -n stacklight scale --replicas 0 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 0 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : true }}'
    
  4. Recreate the opensearch-master StatefulSet with the updated disk size.

    kubectl get statefulset opensearch-master -o yaml -n stacklight | sed 's/storage: 30Gi/storage: <pvcSize>/g' > opensearch-master.yaml
    
    kubectl -n stacklight delete statefulset opensearch-master
    
    kubectl create -f opensearch-master.yaml
    

    Replace <pvcSize> with the elasticsearch.persistentVolumeClaimSize value.

  5. Delete existing PVCs:

    kubectl delete pvc -l 'app=opensearch-master' -n stacklight
    

    Warning

    This command removes all existing logs data from PVCs.

  6. In the Cluster configuration, set the same logging.persistentVolumeClaimSize as the size of elasticsearch.persistentVolumeClaimSize. For example:

    apiVersion: cluster.k8s.io/v1alpha1
    kind: Cluster
    spec:
    ...
      providerSpec:
        value:
        ...
          helmReleases:
          - name: stacklight
            values:
              elasticsearch:
                persistentVolumeClaimSize: 100Gi
              logging:
                enabled: true
                persistentVolumeClaimSize: 100Gi
    
  7. Scale up the opensearch-master StatefulSet with dependent resources and enable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 3 statefulset opensearch-master
    
    sleep 100
    
    kubectl -n stacklight scale --replicas 1 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 1 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : false }}'
    
StackLight in non-HA mode with an expandable StorageClass for OpenSearch PVCs

Note

To verify whether a StorageClass is expandable:

kubectl -n stacklight get pvc | grep opensearch-master | awk '{print $6}' | xargs -I{} kubectl get storageclass {} -o yaml | grep 'allowVolumeExpansion: true'

A positive system response is allowVolumeExpansion: true. A negative system response is blank or false.

  1. Scale down the opensearch-master StatefulSet with dependent resources to 0 and disable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 0 statefulset opensearch-master
    
    kubectl -n stacklight scale --replicas 0 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 0 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : true }}'
    
  2. Recreate the opensearch-master StatefulSet with the updated disk size.

    kubectl -n stacklight get statefulset opensearch-master -o yaml | sed 's/storage: 30Gi/storage: <pvc_size>/g' > opensearch-master.yaml
    
    kubectl -n stacklight delete statefulset opensearch-master
    
    kubectl create -f opensearch-master.yaml
    

    Replace <pvc_size> with the elasticsearch.persistentVolumeClaimSize value.

  3. Patch the PVCs with the new elasticsearch.persistentVolumeClaimSize value:

    kubectl -n stacklight patch pvc opensearch-master-opensearch-master-0 -p  '{ "spec": { "resources": { "requests": { "storage": "<pvc_size>" }}}}'
    

    Replace <pvc_size> with the elasticsearch.persistentVolumeClaimSize value.

  4. In the Cluster configuration, set logging.persistentVolumeClaimSize the same as the size of elasticsearch.persistentVolumeClaimSize. For example:

     apiVersion: cluster.k8s.io/v1alpha1
     kind: Cluster
     spec:
     ...
       providerSpec:
         value:
         ...
           helmReleases:
           - name: stacklight
             values:
               elasticsearch:
                 persistentVolumeClaimSize: 100Gi
               logging:
                 enabled: true
                 persistentVolumeClaimSize: 100Gi
    
  5. Scale up the opensearch-master StatefulSet with dependent resources to 1 and enable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 1 statefulset opensearch-master
    
    sleep 100
    
    kubectl -n stacklight scale --replicas 1 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 1 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : false }}'
    
StackLight in non-HA mode with a non-expandable StorageClass and no LVP for OpenSearch PVCs

Warning

After applying this workaround, the existing log data will be lost. Depending on your custom provisioner, you may be able to use a third-party tool, such as pv-migrate, to copy all data from one PV to another.

If data loss is acceptable, proceed with the workaround below.

Note

To verify whether a StorageClass is expandable:

kubectl -n stacklight get pvc | grep opensearch-master | awk '{print $6}' | xargs -I{} kubectl get storageclass {} -o yaml | grep 'allowVolumeExpansion: true'

A positive system response is allowVolumeExpansion: true. A negative system response is blank or false.

  1. Scale down the opensearch-master StatefulSet with dependent resources to 0 and disable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 0 statefulset opensearch-master
    
    kubectl -n stacklight scale --replicas 0 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 0 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : true }}'
    
  2. Recreate the opensearch-master StatefulSet with the updated disk size:

    kubectl get statefulset opensearch-master -o yaml -n stacklight | sed 's/storage: 30Gi/storage: <pvc_size>/g' > opensearch-master.yaml
    
    kubectl -n stacklight delete statefulset opensearch-master
    
    kubectl create -f opensearch-master.yaml
    

    Replace <pvc_size> with the elasticsearch.persistentVolumeClaimSize value.

  3. Delete existing PVCs:

    kubectl delete pvc -l 'app=opensearch-master' -n stacklight
    

    Warning

    This command removes all existing logs data from PVCs.

  4. In the Cluster configuration, set logging.persistentVolumeClaimSize to the same value as the size of the elasticsearch.persistentVolumeClaimSize parameter. For example:

     apiVersion: cluster.k8s.io/v1alpha1
     kind: Cluster
     spec:
     ...
       providerSpec:
         value:
         ...
           helmReleases:
           - name: stacklight
             values:
               elasticsearch:
                 persistentVolumeClaimSize: 100Gi
               logging:
                 enabled: true
                 persistentVolumeClaimSize: 100Gi
    
  5. Scale up the opensearch-master StatefulSet with dependent resources to 1 and enable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 1 statefulset opensearch-master
    
    sleep 100
    
    kubectl -n stacklight scale --replicas 1 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 1 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : false }}'
    
[27732-2] Custom settings for ‘elasticsearch.logstashRetentionTime’ are dismissed

Fixed in 11.6.0 and 12.7.0

Custom settings for the deprecated elasticsearch.logstashRetentionTime parameter are overwritten by the default setting of 1 day.

The issue may affect the following Cluster releases with enabled elasticsearch.logstashRetentionTime:

  • 11.2.0 - 11.5.0

  • 7.8.0 - 7.11.0

  • 8.8.0 - 8.10.0, 12.5.0 (MOSK clusters)

  • 10.2.4 - 10.8.1 (attached MKE 3.4.x clusters)

  • 13.0.2 - 13.5.1 (attached MKE 3.5.x clusters)

As a workaround, in the Cluster object, replace elasticsearch.logstashRetentionTime with elasticsearch.retentionTime that was implemented to replace the deprecated parameter. For example:

apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
spec:
  ...
  providerSpec:
    value:
    ...
      helmReleases:
      - name: stacklight
        values:
          elasticsearch:
            retentionTime:
              logstash: 10
              events: 10
              notifications: 10
          logging:
            enabled: true

For the StackLight configuration procedure and parameters description, refer to Configure StackLight.

[20876] StackLight pods get stuck with the ‘NodeAffinity failed’ error

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshoot StackLight.

On a managed cluster, the StackLight pods may get stuck with the Pod predicate NodeAffinity failed error in the pod status. The issue may occur if the StackLight node label was added to one machine and then removed from another one.

The issue does not affect the StackLight services: all required StackLight pods migrate successfully, except for the extra pods that are created during pod migration and get stuck.

As a workaround, remove the stuck pods:

kubectl --kubeconfig <managedClusterKubeconfig> -n stacklight delete pod <stuckPodName>

Upgrade
[24802] Container Cloud upgrade to 2.18.0 can trigger managed clusters update

Affects only Container Cloud 2.18.0

On clusters with enabled proxy and the NO_PROXY settings containing localhost/127.0.0.1 or matching the automatically added Container Cloud internal endpoints, the Container Cloud release upgrade from 2.17.0 to 2.18.0 triggers automatic update of managed clusters to the latest available Cluster releases in their respective series.

For the issue workaround, contact Mirantis support.

[21810] Upgrade to Cluster releases 5.22.0 and 7.5.0 may get stuck

Affects Ubuntu-based clusters deployed after Feb 10, 2022

If you deploy an Ubuntu-based cluster using the deprecated Cluster release 7.4.0 (and earlier) or 5.21.0 (and earlier) starting from February 10, 2022, the cluster update to the Cluster releases 7.5.0 and 5.22.0 may get stuck while applying the Deploy state to the cluster machines. The issue affects all cluster types: management, regional, and managed.

To verify that the cluster is affected:

  1. Log in to the Container Cloud web UI.

  2. In the Clusters tab, capture the RELEASE and AGE values of the required Ubuntu-based cluster. If the values match the ones from the issue description, the cluster may be affected.

  3. Using SSH, log in to the manager or worker node that got stuck while applying the Deploy state and identify the containerd package version:

    containerd --version
    

    If the version is 1.5.9, the cluster is affected.

  4. In /var/log/lcm/runners/<nodeName>/deploy/, verify whether the Ansible deployment logs contain the following errors that indicate that the cluster is affected:

    The following packages will be upgraded:
      docker-ee docker-ee-cli
    The following packages will be DOWNGRADED:
      containerd.io
    
    STDERR:
    E: Packages were downgraded and -y was used without --allow-downgrades.
    

Workaround:

Warning

Apply the steps below to the affected nodes one by one and only after each consecutive node gets stuck on the Deploy phase with the Ansible log errors. This sequence ensures that each node is cordoned and drained and that Docker is properly stopped. Therefore, no workloads are affected.

  1. Using SSH, log in to the first affected node and install containerd 1.5.8:

    apt-get install containerd.io=1.5.8-1 -y --allow-downgrades --allow-change-held-packages
    
  2. Wait for Ansible to reconcile. The node should become Ready in several minutes.

  3. Wait for the next node of the cluster to get stuck on the Deploy phase with the Ansible log errors. Only after that, apply the steps above on the next node.

  4. Patch the remaining nodes one-by-one using the steps above.


Container Cloud web UI
[23002] Inability to set a custom value for a predefined node label

Fixed in 7.11.0, 11.5.0 and 12.5.0

During machine creation using the Container Cloud web UI, a custom value for a node label cannot be set.

As a workaround, manually add the value to spec.providerSpec.value.nodeLabels in machine.yaml.


[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs because the token is missing the necessary role for the new project. As a workaround, log out and log in to the Container Cloud web UI again.


Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.18.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.31.9

aws-credentials-controller

1.31.9

Azure Updated

azure-provider

1.31.9

azure-credentials-controller

1.31.9

Bare metal

ambassador

1.20.1-alpine

baremetal-operator Updated

6.1.9

baremetal-public-api Updated

6.1.9

baremetal-provider Updated

1.31.9

baremetal-resource-controller

base-focal-20220429170738

ironic Updated

xena-focal-20220513073431

ironic-operator Updated

base-focal-20220501190529

kaas-ipam

base-focal-20220310095439

keepalived

2.1.5

local-volume-provisioner

2.5.0-mcp

mariadb

10.4.17-bionic-20220113085105

IAM

iam Updated

2.4.25

iam-controller Updated

1.31.9

keycloak Updated

16.1.1

Container Cloud

admission-controller Updated

1.31.11

agent-controller Updated

1.31.9

byo-credentials-controller Updated

1.31.9

byo-provider Updated

1.31.9

ceph-kcc-controller Updated

1.31.9

cert-manager Updated

1.31.9

client-certificate-controller Updated

1.31.9

event-controller Updated

1.31.9

golang

1.17.6

kaas-public-api Updated

1.31.9

kaas-exporter Updated

1.31.9

kaas-ui Updated

1.31.12

lcm-controller Updated

0.3.0-239-gae7218ea

license-controller Updated

1.31.9

machinepool-controller Updated

1.31.9

mcc-cache Updated

1.31.9

portforward-controller Updated

1.31.9

proxy-controller Updated

1.31.9

rbac-controller Updated

1.31.9

release-controller Updated

1.31.9

rhellicense-controller Updated

1.31.9

scope-controller Updated

1.31.9

squid-proxy

0.0.1-6

user-controller Updated

1.31.9

Equinix Metal

equinix-provider Updated

1.31.9

equinix-credentials-controller Updated

1.31.9

keepalived

2.1.5

OpenStack Updated

openstack-provider

1.31.9

os-credentials-controller

1.31.9

VMware vSphere

vsphere-provider Updated

1.31.9

vsphere-credentials-controller Updated

1.31.9

keepalived

2.1.5

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.18.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-6.1.9.tgz

baremetal-public-api Updated

https://binary.mirantis.com/bm/helm/baremetal-public-api-6.1.9.tgz

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-xena-focal-debug-20220512084815

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-xena-focal-debug-20220512084815

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-6.1.9.tgz

local-volume-provisioner

https://binary.mirantis.com/bm/helm/local-volume-provisioner-2.5.0-mcp.tgz

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-104-6e2e82c.tgz

target ubuntu system

https://binary.mirantis.com/bm/bin/efi/ubuntu/tgz-bionic-20210622161844

Docker images

ambassador

mirantis.azurecr.io/general/external/docker.io/library/nginx:1.20.1-alpine

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-focal-20220208045851

baremetal-resource-controller Updated

mirantis.azurecr.io/bm/baremetal-resource-controller:base-focal-20220429170738

dynamic_ipxe Updated

mirantis.azurecr.io/bm/dnsmasq/dynamic-ipxe:base-focal-20220429170829

dnsmasq Updated

mirantis.azurecr.io/general/dnsmasq:focal-20220429170747

ironic Updated

mirantis.azurecr.io/openstack/ironic:xena-focal-20220513073431

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:xena-focal-20220513073431

ironic-operator Updated

mirantis.azurecr.io/bm/ironic-operator:base-focal-20220501190529

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20210608113804

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-focal-20220310095439

mariadb

mirantis.azurecr.io/general/mariadb:10.4.17-bionic-20220113085105

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.14.0-1-g8725814

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-focal-20220128103433


Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.31.9.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.31.9.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.31.11.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.31.9.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.31.9.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.31.9.tgz

azure-credentials-controller

https://binary.mirantis.com/core/helm/azure-credentials-controller-1.31.9.tgz

azure-provider

https://binary.mirantis.com/core/helm/azure-provider-1.31.9.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.31.9.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.31.9.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.31.9.tgz

ceph-kcc-controller

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.31.9.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.31.9.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.31.9.tgz

equinix-credentials-controller

https://binary.mirantis.com/core/helm/equinix-credentials-controller-1.31.9.tgz

equinix-provider

https://binary.mirantis.com/core/helm/equinix-provider-1.31.9.tgz

equinixmetalv2-provider

https://binary.mirantis.com/core/helm/equinixmetalv2-provider-1.31.9.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.31.9.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.31.9.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.31.9.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.31.9.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.31.12.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.31.9.tgz

license-controller Updated

https://binary.mirantis.com/core/helm/license-controller-1.31.9.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.31.9.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.31.9.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.31.9.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.31.9.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.31.9.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.31.9.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.31.9.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.31.9.tgz

scope-controller

http://binary.mirantis.com/core/helm/scope-controller-1.31.9.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.31.9.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.31.9.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.31.9.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.31.9.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.31.11

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.31.9

aws-cluster-api-controller Updated

mirantis.azurecr.io/core/aws-cluster-api-controller:1.31.9

aws-credentials-controller Updated

mirantis.azurecr.io/core/aws-credentials-controller:1.31.9

azure-cluster-api-controller Updated

mirantis.azurecr.io/core/azure-cluster-api-controller:1.31.9

azure-credentials-controller Updated

mirantis.azurecr.io/core/azure-credentials-controller:1.31.9

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.31.9

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.31.9

ceph-kcc-controller Updated

mirantis.azurecr.io/core/ceph-kcc-controller:v1.31.9

cert-manager-controller

mirantis.azurecr.io/core/external/cert-manager-controller:v1.6.1

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.31.9

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.31.9

cluster-api-provider-equinix Updated

mirantis.azurecr.io/core/cluster-api-provider-equinix:1.31.9

equinix-credentials-controller Updated

mirantis.azurecr.io/core/equinix-credentials-controller:1.31.9

frontend Updated

mirantis.azurecr.io/core/frontend:1.31.12

haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.12.0-8-g6fabf1c

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.31.9

kproxy Updated

mirantis.azurecr.io/lcm/kproxy:1.31.9

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:v0.3.0-239-gae7218ea

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.31.9

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.14.0-1-g8725814

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.31.9

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.31.9

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.31.9

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.31.9

registry

mirantis.azurecr.io/lcm/registry:2.7.1

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.31.9

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.31.9

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.31.9

squid-proxy

mirantis.azurecr.io/core/squid-proxy:0.0.1-6

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-api-controller:1.31.9

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.31.9

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.31.9


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

iamctl-linux Removed

n/a

iamctl-darwin Removed

n/a

iamctl-windows Removed

n/a

Helm charts

iam Updated

http://binary.mirantis.com/iam/helm/iam-2.4.25.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.12.tgz

keycloak_proxy Updated

http://binary.mirantis.com/core/helm/keycloak_proxy-1.31.9.tgz

Docker images

api Removed

n/a

auxiliary Removed

n/a

kubernetes-entrypoint

mirantis.azurecr.io/iam/external/kubernetes-entrypoint:v0.3.1

mariadb

mirantis.azurecr.io/general/mariadb:10.4.16-bionic-20201105025052

keycloak Updated

mirantis.azurecr.io/iam/keycloak:0.5.7

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-2

2.17.0

The Mirantis Container Cloud GA release 2.17.0:

  • Introduces support for the Cluster release 11.1.0 that is based on Mirantis Container Runtime 20.10.8 and Mirantis Kubernetes Engine 3.5.1 with Kubernetes 1.21.

  • Introduces support for the Cluster release 7.7.0 that is based on Mirantis Container Runtime 20.10.8 and Mirantis Kubernetes Engine 3.4.7 with Kubernetes 1.20.

  • Supports the Cluster release 8.6.0 that is based on the Cluster release 7.6.0 and represents Mirantis OpenStack for Kubernetes (MOSK) 22.2.

  • Does not support greenfield deployments on deprecated Cluster releases 11.0.0, 8.5.0, and 7.6.0. Use the latest Cluster releases of the series instead.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.17.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.17.0. For the list of enhancements in the Cluster releases 11.1.0 and 7.7.0 that are introduced by the Container Cloud release 2.17.0, see the Cluster releases (managed).


General availability for Ubuntu 20.04 on greenfield deployments

Implemented full support for Ubuntu 20.04 LTS (Focal Fossa) as the default host operating system that now installs on management, regional, and managed clusters for the following cloud providers: AWS, Azure, OpenStack, Equinix Metal with public or private networking, and non-MOSK-based bare metal.

For the vSphere and MOSK-based (managed) deployments, support for Ubuntu 20.04 will be announced in one of the following Container Cloud releases.

Note

The management or regional bare metal cluster dedicated for managed clusters running MOSK is based on Ubuntu 20.04.

Caution

Upgrading from Ubuntu 18.04 to 20.04 on existing deployments is not supported.

Container Cloud on top of MOSK Victoria with Tungsten Fabric

Implemented the capability to deploy Container Cloud management, regional, and managed clusters based on OpenStack Victoria with Tungsten Fabric networking on top of Mirantis OpenStack for Kubernetes (MOSK) Victoria with Tungsten Fabric.

Note

On the MOSK Victoria with Tungsten Fabric clusters of Container Cloud deployed before MOSK 22.3, Octavia enables a default security group for newly created load balancers. To change this configuration, refer to MOSK Operations Guide: Configure load balancing. To use the default security group, configure ingress rules.

EBS instead of NVMe as persistent storage for AWS-based nodes

Replaced the Non-Volatile Memory Express (NVMe) drive type with the Amazon Elastic Block Store (EBS) one as the persistent storage requirement for AWS-based nodes. This change prevents cluster nodes from becoming unusable after instances are stopped and NVMe drives are erased.

Previously, the /var/lib/docker Docker data was located on local NVMe SSDs by default. Now, this data is located on the same EBS volume drive as the operating system.

Manager nodes deletion on all cluster types

TechPreview

Implemented the capability to delete manager nodes with the purpose of replacement or recovery. Consider the following precautions:

  • Create a new manager machine to replace the deleted one as soon as possible. This is necessary because after a machine is removed, the cluster has a limited capability to tolerate faults. Deletion of manager machines is intended only for the replacement or recovery of failed nodes.

  • You can delete a manager machine only if your cluster has at least two manager machines in the Ready state.

  • Do not delete more than one manager machine at once to prevent cluster failure and data loss.

  • For MOSK-based clusters, after a manager machine deletion, proceed with additional manual steps described in Mirantis OpenStack for Kubernetes Operations Guide: Replace a failed controller node.

  • For the Equinix Metal and bare metal providers:

    • Ensure that the machine to delete is not a Ceph Monitor. If it is, migrate the Ceph Monitor to keep the odd number quorum of Ceph Monitors after the machine deletion. For details, see Migrate a Ceph Monitor before machine replacement.

    • If you delete a machine on the regional cluster, refer to the known issue 23853 to complete the deletion.

To ensure high availability, the size of a managed cluster is now limited to an odd number of manager machines. In a cluster with an even number of manager machines, the additional machine remains in the Pending state until one more manager machine is added.
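
As with worker machines, a manager machine can be removed through the Container Cloud web UI or by deleting its Machine object on the management cluster. A minimal sketch, assuming the project namespace and machine name placeholders are known and the precautions above are satisfied:

kubectl --kubeconfig <mgmtClusterKubeconfig> -n <projectName> delete machine <managerMachineName>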

Custom values for node labels

Extended the use of node labels for all supported cloud providers with the ability to set custom values. From the MOSK standpoint in particular, this feature simplifies scheduling overrides for OpenStack services through the API. For example, you can now set the value of the node-type label to define the node purpose, such as hpc-compute, compute-lvm, or storage-ssd.

The list of allowed node labels is located in the Cluster object status providerStatus.releaseRef.current.allowedNodeLabels field. Before or after a machine deployment, add the required label from the allowed node labels list with the corresponding value to spec.providerSpec.value.nodeLabels in machine.yaml.
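
For illustration, a minimal fragment of machine.yaml with a custom value set for the node-type label may look as follows. The label key and value below are examples only; use labels that are present in allowedNodeLabels:

spec:
  providerSpec:
    value:
      nodeLabels:
      - key: node-type       # label key from the allowed node labels list
        value: storage-ssd   # example custom value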

Note

Due to the known issue 23002, it is not possible to set a custom value for a predefined node label using the Container Cloud web UI. For a workaround, refer to the issue description.

Machine pools

Introduced the MachinePool custom resource. A machine pool is a template that allows managing a set of machines with the same provider spec as a single unit. You can create different sets of machine pools with the required specs during machine creation on a new or existing cluster using the Create machine wizard in the Container Cloud web UI. You can assign machines to or unassign them from a pool, if required, and increase or decrease the replicas count. When the replicas count is increased, new machines are added automatically.
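
The exact MachinePool schema is provider-specific. The following is only a rough, hypothetical sketch under the assumption that a pool defines a replicas count and a shared machine template in its spec; the field names other than metadata are illustrative assumptions, not the definitive API:

apiVersion: kaas.mirantis.com/v1alpha1
kind: MachinePool
metadata:
  name: worker-pool          # hypothetical pool name
  namespace: <projectName>
spec:
  replicas: 3                # number of machines managed as a single unit
  machineSpec: {}            # provider-specific machine template shared by all machines in the pool (assumed field name)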

Automatic propagation of Salesforce configuration to all clusters

Implemented the capability to enable automatic propagation of the Salesforce configuration of your management cluster to the related regional and managed clusters using the autoSyncSalesForceConfig=true flag added to the Cluster object of the management cluster. This option allows for automatic update and sync of the Salesforce settings on all your clusters after you update your management cluster configuration.

You can also set custom settings for regional and managed clusters that always override automatically propagated Salesforce values.
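
A minimal sketch of enabling the flag by editing the Cluster object of the management cluster. The exact nesting of the flag under spec.providerSpec.value is an assumption; adjust it to your configuration:

kubectl --kubeconfig <mgmtClusterKubeconfig> edit cluster <mgmtClusterName>

spec:
  providerSpec:
    value:
      ...
      autoSyncSalesForceConfig: true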

Note

The capability to enable this option using the Container Cloud web UI will be announced in one of the following releases.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.17.0 along with the Cluster releases 11.1.0 and 7.7.0:

  • Bare metal:

    • [22563] Fixed the issue wherein a deployment of a bare metal node with an LVM volume on top of an mdadm-based raid10 failed during provisioning due to insufficient cleanup of RAID devices.

  • Equinix Metal:

    • [22264] Fixed the issue wherein the KubeContainersCPUThrottlingHigh alerts for Equinix Metal and AWS deployments were raised due to low default deployment limits set for the Equinix Metal and AWS controller containers.

  • StackLight:

    • [23006] Fixed the issue that caused StackLight endpoints to crash on start with the private key does not match public key error message.

    • [22626] Fixed the issue that caused constant restarts of the kaas-exporter pod by increasing the memory requests and limits for kaas-exporter.

    • [22337] Improved the certificate expiration alerts by enhancing the alert severities.

    • [20856] Fixed the issue wherein variables values in the PostgreSQL Grafana dashboard were not calculated.

    • [20855] Fixed the issue wherein the Cluster > Health panel showed N/A in the Elasticsearch Grafana dashboard.

  • Ceph:

  • LCM:

    • [22341] Fixed the issue wherein the cordon-drain states were not removed after unsetting the maintenance mode for a machine.

  • Cluster health:

    • [21494] Fixed the issue wherein controller pods were killed by OOM after a successful deployment of a management or regional cluster.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.17.0 including the Cluster releases 11.1.0 and 7.7.0.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


MKE
[20651] A cluster deployment or update fails with not ready compose deployments

A managed cluster deployment, attachment, or update to a Cluster release with MKE versions 3.3.13, 3.4.6, 3.5.1, or earlier may fail with the compose pods flapping (ready > terminating > pending) and with the following error message appearing in logs:

'not ready: deployments: kube-system/compose got 0/0 replicas, kube-system/compose-api
 got 0/0 replicas'
 ready: false
 type: Kubernetes

Workaround:

  1. Disable Docker Content Trust (DCT):

    1. Access the MKE web UI as admin.

    2. Navigate to Admin > Admin Settings.

    3. In the left navigation pane, click Docker Content Trust and disable it.

  2. Restart the affected deployments such as calico-kube-controllers, compose, compose-api, coredns, and so on:

    kubectl -n kube-system delete deployment <deploymentName>
    

    Once done, the cluster deployment or update resumes.

  3. Re-enable DCT.



Bare metal
[20736] Region deletion failure after regional deployment failure

If a baremetal-based regional cluster deployment fails before pivoting is done, the corresponding region deletion fails.

Workaround:

Using the command below, manually delete all possible traces of the failed regional cluster deployment, including but not limited to the following objects that contain the kaas.mirantis.com/region label of the affected region:

  • cluster

  • machine

  • baremetalhost

  • baremetalhostprofile

  • l2template

  • subnet

  • ipamhost

  • ipaddr

kubectl delete <objectName> -l kaas.mirantis.com/region=<regionName>
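
For convenience, the deletion can be scripted over all object types listed above. A minimal sketch, assuming your current kubeconfig context points to the management cluster; add -n <namespace> or --all-namespaces flags as appropriate for your setup:

for objectType in cluster machine baremetalhost baremetalhostprofile l2template subnet ipamhost ipaddr; do
  # delete every object of this type that carries the affected region label
  kubectl delete "${objectType}" -l kaas.mirantis.com/region=<regionName>
done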

Warning

Do not use the same region name again after the regional cluster deployment failure since some objects that reference the region name may still exist.



Equinix Metal
[16379,23865] Cluster update fails with the FailedMount warning

Fixed in 2.19.0

An Equinix-based management or managed cluster fails to update with the FailedAttachVolume and FailedMount warnings.

Workaround:

  1. Verify that the description of the pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    
    • <affectedProjectName> is the Container Cloud project name where the pods failed to run

    • <affectedPodName> is a pod name that failed to run in this project

    In the pod description, identify the node name where the pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the pod that fails to init to 0 replicas.

  4. On every csi-rbdplugin pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state is Running.



IAM
[18331] Keycloak admin console menu disappears on ‘Add identity provider’ page

Fixed in 2.18.0

During configuration of an identity provider SAML using the Add identity provider menu of the Keycloak admin console, the page style breaks as well as the Save and Cancel buttons disappear.

Workaround:

  1. Log in to the Keycloak admin console.

  2. In the sidebar menu, switch to the Master realm.

  3. Navigate to Realm Settings > Themes.

  4. In the Admin Console Theme drop-down menu, select keycloak.

  5. Click Save and refresh the browser window to apply the changes.


LCM
[23853] Replacement of a regional master node fails on bare metal and Equinix Metal

Fixed in 2.18.0

During replacement of a failed master node on regional clusters of the bare metal and Equinix Metal providers, the KaaSCephOperationRequest resource created to remove the failed node from the Ceph cluster is stuck with the Failed status and an error message in errorReason. For example:

status:
  removeStatus:
    osdRemoveStatus:
      errorReason: Timeout (30m0s) reached for waiting pg rebalance for osd 2
      status: Failed

The Failed status blocks the replacement of the failed master node.

Workaround:

  1. On the management cluster, obtain metadata.name, metadata.namespace, and the spec section of KaaSCephOperationRequest being stuck:

    kubectl get kaascephoperationrequest <kcorName> -o yaml
    

    Replace <kcorName> with the name of KaaSCephOperationRequest that has the Failed status.

  2. Create a new KaaSCephOperationRequest template and save it as .yaml. For example, kcor-stuck-regional.yaml.

    apiVersion: kaas.mirantis.com/v1alpha1
    kind: KaaSCephOperationRequest
    metadata:
      name: <newKcorName>
      namespace: <kcorNamespace>
    spec: <kcorSpec>
    
    • <newKcorName>

      Name of new KaaSCephOperationRequest that differs from the failed one. Usually a failed KaaSCephOperationRequest resource is called delete-request-for-<masterMachineName>. Therefore, you can name the new resource as delete-request-for-<masterMachineName>-new.

    • <kcorNamespace>

      Namespace of the failed KaaSCephOperationRequest resource.

    • <kcorSpec>

      Spec of the failed KaaSCephOperationRequest resource.

  3. Apply the created template to the management cluster. For example:

    kubectl apply -f kcor-stuck-regional.yaml
    
  4. Remove the failed KaaSCephOperationRequest resource from the management cluster:

    kubectl delete kaascephoperationrequest <kcorName>
    

    Replace <kcorName> with the name of KaaSCephOperationRequest that has the Failed status.


StackLight
[20876] StackLight pods get stuck with the ‘NodeAffinity failed’ error

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshoot StackLight.

On a managed cluster, the StackLight pods may get stuck with the Pod predicate NodeAffinity failed error in the pod status. The issue may occur if the StackLight node label was added to one machine and then removed from another one.

The issue does not affect the StackLight services: all required StackLight pods migrate successfully, except for extra pods that are created and get stuck during pod migration.

As a workaround, remove the stuck pods:

kubectl --kubeconfig <managedClusterKubeconfig> -n stacklight delete pod <stuckPodName>

Upgrade
[21810] Upgrade to Cluster releases 5.22.0 and 7.5.0 may get stuck

Affects Ubuntu-based clusters deployed after Feb 10, 2022

If you deploy an Ubuntu-based cluster using the deprecated Cluster release 7.4.0 (and earlier) or 5.21.0 (and earlier) starting from February 10, 2022, the cluster update to the Cluster releases 7.5.0 and 5.22.0 may get stuck while applying the Deploy state to the cluster machines. The issue affects all cluster types: management, regional, and managed.

To verify that the cluster is affected:

  1. Log in to the Container Cloud web UI.

  2. In the Clusters tab, capture the RELEASE and AGE values of the required Ubuntu-based cluster. If the values match the ones from the issue description, the cluster may be affected.

  3. Using SSH, log in to the manager or worker node that got stuck while applying the Deploy state and identify the containerd package version:

    containerd --version
    

    If the version is 1.5.9, the cluster is affected.

  4. In /var/log/lcm/runners/<nodeName>/deploy/, verify whether the Ansible deployment logs contain the following errors that indicate that the cluster is affected:

    The following packages will be upgraded:
      docker-ee docker-ee-cli
    The following packages will be DOWNGRADED:
      containerd.io
    
    STDERR:
    E: Packages were downgraded and -y was used without --allow-downgrades.
    

Workaround:

Warning

Apply the steps below to the affected nodes one-by-one and only after each consecutive node gets stuck on the Deploy phase with the Ansible log errors. This sequence ensures that each node is cordoned and drained and that Docker is properly stopped, so no workloads are affected.

  1. Using SSH, log in to the first affected node and install containerd 1.5.8:

    apt-get install containerd.io=1.5.8-1 -y --allow-downgrades --allow-change-held-packages
    
  2. Wait for Ansible to reconcile. The node should become Ready in several minutes.

  3. Wait for the next node of the cluster to get stuck on the Deploy phase with the Ansible log errors. Only after that, apply the steps above on the next node.

  4. Patch the remaining nodes one-by-one using the steps above.


Container Cloud web UI
[24075] Ubuntu 20.04 does not display for AWS and Equinix Metal managed clusters

Fixed in 2.18.0

During creation of a machine for the AWS or Equinix Metal provider with public networking, the Ubuntu 20.04 option does not display in the drop-down list of operating systems in the Container Cloud web UI. Only Ubuntu 18.04 displays in the list.

Workaround:

  1. Identify the parent management or regional cluster of the affected managed cluster located in the same region.

    For example, if the affected managed cluster was deployed in region-one, identify its parent cluster by running:

    kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n default get cluster -l kaas.mirantis.com/region=region-one
    

    Replace region-one with the corresponding value.

    Example of system response:

    NAME           AGE
    test-cluster   19d
    
  2. Modify the related management or regional Cluster object with the correct values for the credentials-controller Helm releases:

    kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n default edit cluster <managementOrRegionalClusterName>
    

    In the system response, the editor displays the current state of the cluster. Find the spec.providerSpec.value.kaas.regional section.

    Example of the regional section in the Cluster object:

    spec:
      providerSpec:
        value:
          kaas:
            regional:
            - provider: aws
              helmReleases:
              - name: aws-credentials-controller
                values:
                  region: region-one
                  ...
            - provider: equinixmetal
              ...
    
  3. For the aws and equinixmetal providers (if available), modify the credentials-controller values as follows:

    Warning

    Do not overwrite existing values. For example, if one of Helm releases already has region: region-one, do not modify or remove it.

    • For aws-credentials-controller:

      values:
        config:
          allowedAMIs:
          -
            - name: name
              values:
                - "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20211129"
            - name: owner-id
              values:
               - "099720109477"
      
    • For equinixmetal-credentials-controller:

      values:
        config:
          allowedOperatingSystems:
          - distro: ubuntu
            version: 20.04
      

    If the aws-credentials-controller or equinixmetal-credentials-controller Helm releases are missing in the spec.providerSpec.value.kaas.regional section or the helmReleases array is missing for the corresponding provider, add the releases with the overwritten values.

    Example of the helmReleases array for AWS:

    - provider: aws
      helmReleases:
      - name: aws-credentials-controller
        values:
          config:
            allowedAMIs:
            -
              - name: name
                values:
                  - "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20211129"
              - name: owner-id
                values:
                 - "099720109477"
       ...
    

    Example of the helmReleases array for Equinix Metal:

    - provider: equinixmetal
      helmReleases:
      - name: equinixmetal-credentials-controller
        values:
          config:
            allowedOperatingSystems:
            - distro: ubuntu
              version: 20.04
    
  4. Wait for approximately 2 minutes for the AWS and/or Equinix credentials-controller to be restarted.

  5. Log out and log in again to the Container Cloud web UI.

  6. Restart the machine addition procedure.

Warning

After Container Cloud is upgraded to 2.18.0, remove the values added during the workaround application from the Cluster object.

[23002] Inability to set a custom value for a predefined node label

Fixed in 7.11.0, 11.5.0 and 12.5.0

During machine creation using the Container Cloud web UI, a custom value for a node label cannot be set.

As a workaround, manually add the value to spec.providerSpec.value.nodeLabels in machine.yaml.


[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs because the token is missing the necessary role for the new project. As a workaround, log out and log in again to the Container Cloud web UI.


Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.17.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.30.6

aws-credentials-controller

1.30.6

Azure Updated

azure-provider

1.30.6

azure-credentials-controller

1.30.6

Bare metal

ambassador

1.20.1-alpine

baremetal-operator Updated

6.1.4

baremetal-public-api Updated

6.1.4

baremetal-provider Updated

1.30.6

baremetal-resource-controller

base-focal-20220128182941

ironic Updated

victoria-bionic-20220328060019

ironic-operator Updated

base-focal-20220310095139

kaas-ipam Updated

base-focal-20220310095439

keepalived

2.1.5

local-volume-provisioner

2.5.0-mcp

mariadb

10.4.17-bionic-20220113085105

IAM

iam

2.4.14

iam-controller Updated

1.30.6

keycloak

15.0.2

Container Cloud

admission-controller Updated

1.30.6

agent-controller Updated

1.30.6

byo-credentials-controller Updated

1.30.6

byo-provider Updated

1.30.6

ceph-kcc-controller New

1.30.6

cert-manager Updated

1.30.6

client-certificate-controller Updated

1.30.6

event-controller Updated

1.30.6

golang

1.17.6

kaas-public-api Updated

1.30.6

kaas-exporter Updated

1.30.6

kaas-ui Updated

1.30.9

lcm-controller Updated

0.3.0-230-gdc7efe1c

license-controller Updated

1.30.6

machinepool-controller New

1.30.6

mcc-cache Updated

1.30.6

portforward-controller Updated

1.30.6

proxy-controller Updated

1.30.6

rbac-controller Updated

1.30.6

release-controller Updated

1.30.8

rhellicense-controller Updated

1.30.6

scope-controller Updated

1.30.6

squid-proxy

0.0.1-6

user-controller Updated

1.30.6

Equinix Metal

equinix-provider Updated

1.30.6

equinix-credentials-controller Updated

1.30.6

keepalived

2.1.5

OpenStack Updated

openstack-provider

1.30.6

os-credentials-controller

1.30.6

VMware vSphere

vsphere-provider Updated

1.30.6

vsphere-credentials-controller Updated

1.30.6

keepalived

2.1.5

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.17.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-6.1.4.tgz

baremetal-public-api Updated

https://binary.mirantis.com/bm/helm/baremetal-public-api-6.1.4.tgz

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-victoria-focal-debug-20220208120746

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-victoria-focal-debug-20220208120746

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-6.1.4.tgz

local-volume-provisioner

https://binary.mirantis.com/bm/helm/local-volume-provisioner-2.5.0-mcp.tgz

provisioning_ansible Updated

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-104-6e2e82c.tgz

target ubuntu system

https://binary.mirantis.com/bm/bin/efi/ubuntu/tgz-bionic-20210622161844

Docker images

ambassador

mirantis.azurecr.io/general/external/docker.io/library/nginx:1.20.1-alpine

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-focal-20220208045851

baremetal-resource-controller

mirantis.azurecr.io/bm/baremetal-resource-controller:base-focal-20220128182941

dynamic_ipxe Updated

mirantis.azurecr.io/bm/dnsmasq/dynamic-ipxe:base-focal-20220310100410

dnsmasq

mirantis.azurecr.io/general/dnsmasq:focal-20210617094827

ironic Updated

mirantis.azurecr.io/openstack/ironic:victoria-bionic-20220328060019

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:victoria-bionic-20220328060019

ironic-operator Updated

mirantis.azurecr.io/bm/ironic-operator:base-focal-20220310095139

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20210608113804

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-focal-20220310095439

mariadb

mirantis.azurecr.io/general/mariadb:10.4.17-bionic-20220113085105

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.14.0-1-g8725814

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-focal-20220128103433


Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.30.6.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.30.6.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.30.6.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.30.6.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.30.6.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.30.6.tgz

azure-credentials-controller

https://binary.mirantis.com/core/helm/azure-credentials-controller-1.30.6.tgz

azure-provider

https://binary.mirantis.com/core/helm/azure-provider-1.30.6.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.30.6.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.30.6.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.30.6.tgz

ceph-kcc-controller New

https://binary.mirantis.com/core/helm/ceph-kcc-controller-1.30.6.tgz

cert-manager

https://binary.mirantis.com/core/helm/cert-manager-1.30.6.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.30.6.tgz

equinix-credentials-controller

https://binary.mirantis.com/core/helm/equinix-credentials-controller-1.30.6.tgz

equinix-provider

https://binary.mirantis.com/core/helm/equinix-provider-1.30.6.tgz

equinixmetalv2-provider

https://binary.mirantis.com/core/helm/equinixmetalv2-provider-1.30.6.tgz

event-controller

https://binary.mirantis.com/core/helm/event-controller-1.30.6.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.30.6.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.30.6.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.30.6.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.30.6.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.30.6.tgz

license-controller Updated

https://binary.mirantis.com/core/helm/license-controller-1.30.6.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.30.6.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.30.6.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.30.6.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.30.6.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.30.6.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.30.6.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.30.8.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.30.6.tgz

scope-controller

http://binary.mirantis.com/core/helm/scope-controller-1.30.6.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.30.6.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.30.6.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.30.6.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.30.6.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.30.6

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.30.6

aws-cluster-api-controller Updated

mirantis.azurecr.io/core/aws-cluster-api-controller:1.30.6

aws-credentials-controller Updated

mirantis.azurecr.io/core/aws-credentials-controller:1.30.6

azure-cluster-api-controller Updated

mirantis.azurecr.io/core/azure-cluster-api-controller:1.30.6

azure-credentials-controller Updated

mirantis.azurecr.io/core/azure-credentials-controller:1.30.6

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.30.6

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.30.6

ceph-kcc-controller New

mirantis.azurecr.io/core/ceph-kcc-controller:v1.30.6

cert-manager-controller Updated

mirantis.azurecr.io/core/external/cert-manager-controller:v1.6.1

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.30.6

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.30.6

cluster-api-provider-equinix Updated

mirantis.azurecr.io/core/cluster-api-provider-equinix:1.30.6

equinix-credentials-controller Updated

mirantis.azurecr.io/core/equinix-credentials-controller:1.30.6

frontend Updated

mirantis.azurecr.io/core/frontend:1.30.6

haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.12.0-8-g6fabf1c

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.30.6

kproxy Updated

mirantis.azurecr.io/lcm/kproxy:1.30.6

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:v0.3.0-230-gdc7efe1c

license-controller Updated

mirantis.azurecr.io/core/license-controller:1.30.6

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.14.0-1-g8725814

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.30.6

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.30.6

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.30.6

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.30.6

registry

mirantis.azurecr.io/lcm/registry:2.7.1

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.30.8

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.30.6

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.30.6

squid-proxy

mirantis.azurecr.io/core/squid-proxy:0.0.1-6

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-api-controller:1.30.6

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.30.6

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.30.6


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

iamctl-linux Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.5-linux

iamctl-darwin Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.5-darwin

iamctl-windows Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.5-windows

Helm charts

iam

http://binary.mirantis.com/iam/helm/iam-2.4.14.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.12.tgz

keycloak_proxy Updated

http://binary.mirantis.com/core/helm/keycloak_proxy-1.30.9.tgz

Docker images

api Deprecated

mirantis.azurecr.io/iam/api:0.5.5

auxiliary

mirantis.azurecr.io/iam/auxiliary:0.5.5

kubernetes-entrypoint

mirantis.azurecr.io/iam/external/kubernetes-entrypoint:v0.3.1

mariadb

mirantis.azurecr.io/general/mariadb:10.4.16-bionic-20201105025052

keycloak

mirantis.azurecr.io/iam/keycloak:0.5.4

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-2

2.16.1

The Mirantis Container Cloud GA release 2.16.1 is based on 2.16.0 and:

  • Introduces support for the Cluster release 8.6.0 that is based on the Cluster release 7.6.0 and represents Mirantis OpenStack for Kubernetes (MOSK) 22.2. This Cluster release is based on the updated version of Mirantis Kubernetes Engine 3.4.7 with Kubernetes 1.20 and Mirantis Container Runtime 20.10.8.

  • Supports the latest Cluster releases 7.6.0 and 11.0.0.

  • Does not support new deployments based on the deprecated Cluster releases 8.5.0, 7.5.0, 6.20.0, and 5.22.0 that were deprecated in 2.16.0.

For details about the Container Cloud release 2.16.1, refer to its parent release 2.16.0:

Caution

Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

2.16.0

The Mirantis Container Cloud GA release 2.16.0:

  • Introduces support for the Cluster release 11.0.0 for managed clusters that is based on Mirantis Container Runtime 20.10.8 and the updated version of Mirantis Kubernetes Engine 3.5.1 with Kubernetes 1.21.

  • Introduces support for the Cluster release 7.6.0 for all types of clusters that is based on Mirantis Container Runtime 20.10.8 and the updated version of Mirantis Kubernetes Engine 3.4.7 with Kubernetes 1.20.

  • Supports the Cluster release 8.5.0 that is based on the Cluster release 7.5.0 and represents Mirantis OpenStack for Kubernetes (MOSK) 22.1.

  • Does not support greenfield deployments on deprecated Cluster releases 7.5.0, 6.20.0, and 5.22.0. Use the latest Cluster releases of the series instead.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.16.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.16.0. For the list of enhancements in the Cluster releases 11.0.0 and 7.6.0 that are introduced by the Container Cloud release 2.16.0, see the Cluster releases (managed).


License management using the Container Cloud web UI

Implemented a mechanism for updating the Container Cloud and MKE license using the Container Cloud web UI. During the automatic license update, machines are not cordoned or drained, and user workloads are not interrupted for all clusters starting from the Cluster releases 7.6.0, 8.6.0, and 11.0.0. Therefore, after your management cluster upgrades to Container Cloud 2.16.0, make sure to update your managed clusters to the latest available Cluster releases.

Caution

Only the Container Cloud web UI users with the m:kaas@global-admin role can update the Container Cloud license.

Scheduling of a management cluster upgrade using web UI

TechPreview

Implemented initial Technology Preview support for management cluster upgrade scheduling through the Container Cloud web UI. Also added full support for management cluster upgrade scheduling through the CLI.

Automatic renewal of internal TLS certificates

Implemented automatic renewal of self-signed TLS certificates for internal Container Cloud services that are generated and managed by the Container Cloud provider.

Note

Custom certificates still require manual renewal. If applicable, the information about expiring custom certificates is available in the Container Cloud web UI.

Ubuntu 20.04 for greenfield bare metal managed clusters

TechPreview

Implemented initial Technology Preview support for Ubuntu 20.04 (Focal Fossa) on bare metal non-MOSK-based greenfield deployments of managed clusters. Now, you can optionally deploy Kubernetes machines with Ubuntu 20.04 on bare metal hosts. By default, Ubuntu 18.04 is used.

Caution

Upgrading to Ubuntu 20.04 on existing deployments initially created before Container Cloud 2.16.0 is not supported.

Note

Support for Ubuntu 20.04 on MOSK-based Cluster releases will be added in one of the following Container Cloud releases.

Additional regional cluster on bare metal

Extended regional cluster support by implementing the ability to deploy an additional regional cluster on bare metal. This makes it possible to create baremetal-based managed clusters in bare metal regions in parallel with managed clusters of regional clusters based on other providers within a single Container Cloud deployment.

MOSK on local RAID devices

TechPreview

Implemented the initial Technology Preview support for Mirantis OpenStack for Kubernetes (MOSK) deployment on local software-based Redundant Array of Independent Disks (RAID) devices to withstand failure of one device at a time. The feature is available in the Cluster release 8.5.0 after the Container Cloud upgrade to 2.16.0.

Using a custom bare metal host profile, you can configure and create an mdadm-based software RAID device of type raid10 if you have an even number of devices available on your servers. At least four storage devices are required for such a RAID device.
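
For reference, outside of the Container Cloud host profile machinery, an mdadm raid10 array spanning four devices is created with a command similar to the following. This is an illustrative Linux example only; in Container Cloud, define the RAID layout in the custom bare metal host profile instead of running mdadm manually:

mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd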

Any interface name for bare metal LCM network

Implemented the ability to use any interface name instead of the k8s-lcm bridge for the LCM network traffic on a bare metal cluster. The Subnet objects for the LCM network must have the ipam/SVC-k8s-lcm label. For details, see Service labels and their life cycle.
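
A minimal sketch of a Subnet object carrying the required service label; the apiVersion, label value, and CIDR below are illustrative assumptions:

apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: lcm-subnet
  namespace: <projectName>
  labels:
    ipam/SVC-k8s-lcm: "1"   # service label marking the Subnet as the LCM network
spec:
  cidr: 10.10.0.0/24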

Keepalived for built-in load balancing in standalone containers

For the Container Cloud managed clusters that are based on vSphere, Equinix Metal, or bare metal, moved Keepalived for the built-in load balancer to run in standalone Docker containers managed by systemd as a service. This change ensures version consistency of crucial infrastructure services and reduces dependency on a host operating system version and configuration.

Reworked ‘Reconfigure’ phase of LCMMachine

Reworked the Reconfigure phase of LCMMachine so that it can now apply to all nodes. This phase runs after the Deploy phase and applies the stateItems that relate to it without affecting workloads running on the machine.

Learn more

LCM Controller

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.16.0 along with the Cluster releases 11.0.0 and 7.6.0:

  • Bare metal:

    • [15989] Fixed the issue wherein removal of a bare metal-based management cluster failed with a timeout.

    • [20189] Fixed the issue with the Container Cloud web UI reporting a successful upgrade of a baremetal-based management cluster while running the previous release.

  • OpenStack:

    • [20992] Fixed the issue that caused the inability to deploy an OpenStack-based managed cluster if DVR was enabled.

    • [20549] Fixed the CVE-2021-3520 security vulnerability in the cinder-csi-plugin Docker image.

  • Equinix Metal:

    • [20467] Fixed the issue that caused deployment of an Equinix Metal based management cluster with private networking to fail with the following error message during the Ironic deployment:

      0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
      
    • [21324] Fixed the issue wherein the bare metal host was trying to configure an Equinix node as UEFI even for nodes with UEFI disabled.

    • [21326] Fixed the issue wherein the Ironic agent could not properly determine which disk would be the first disk on the node. As a result, some Equinix servers failed to boot from the proper disk.

    • [21338] Fixed the issue wherein some Equinix servers were configured in BIOS to always boot from PXE, which caused the operating system to fail to start from disk after provisioning.

  • StackLight:

    • [21646] Adjusted the kaas-exporter resource requests and limits to avoid issues with the kaas-exporter container being occasionally throttled and OOMKilled, which prevented the Container Cloud metrics gathering.

    • [20591] Adjusted the RAM usage limit and disabled indices monitoring for prometheus-es-exporter to avoid prometheus-es-exporter pod crash looping due to low memory issues.

    • [17493] Fixed several security vulnerabilities in the fluentd and spilo Docker images.

  • Ceph:

    • [20745] Fixed the issue wherein namespace deletion failed after removal of a managed cluster.

    • [7073] Fixed the issue with inability to automatically remove a Ceph node when removing a worker node.

  • IAM:

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.16.0 including the Cluster releases 11.0.0 and 7.6.0.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


MKE
[20651] A cluster deployment or update fails with not ready compose deployments

A managed cluster deployment, attachment, or update to a Cluster release with MKE versions 3.3.13, 3.4.6, 3.5.1, or earlier may fail with the compose pods flapping (ready > terminating > pending) and with the following error message appearing in logs:

'not ready: deployments: kube-system/compose got 0/0 replicas, kube-system/compose-api
 got 0/0 replicas'
 ready: false
 type: Kubernetes

Workaround:

  1. Disable Docker Content Trust (DCT):

    1. Access the MKE web UI as admin.

    2. Navigate to Admin > Admin Settings.

    3. In the left navigation pane, click Docker Content Trust and disable it.

  2. Restart the affected deployments such as calico-kube-controllers, compose, compose-api, coredns, and so on:

    kubectl -n kube-system delete deployment <deploymentName>
    

    Once done, the cluster deployment or update resumes.

  3. Re-enable DCT.



Equinix Metal
[22264] KubeContainersCPUThrottlingHigh alerts for Equinix and AWS deployments

Fixed in 2.17.0

The default deployment limits for the Equinix and AWS controller containers, set to 400m, may be lower than the consumed amount of resources, leading to KubeContainersCPUThrottlingHigh alerts in StackLight.

As a workaround, increase the default resource limits for the affected equinix-controllers or aws-controllers to 700m. For example:

kubectl edit deployment -n kaas aws-controllers
spec:
...
  resources:
    limits:
      cpu: 700m
      ...

[16379,23865] Cluster update fails with the FailedMount warning

Fixed in 2.19.0

An Equinix-based management or managed cluster fails to update with the FailedAttachVolume and FailedMount warnings.

Workaround:

  1. Verify that the description of the pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    
    • <affectedProjectName> is the Container Cloud project name where the pods failed to run

    • <affectedPodName> is a pod name that failed to run in this project

    In the pod description, identify the node name where the pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the pod that fails to init to 0 replicas.

  4. On every csi-rbdplugin pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state is Running.



Bare metal
[20736] Region deletion failure after regional deployment failure

If a baremetal-based regional cluster deployment fails before pivoting is done, the corresponding region deletion fails.

Workaround:

Using the command below, manually delete all possible traces of the failed regional cluster deployment, including but not limited to the following objects that contain the kaas.mirantis.com/region label of the affected region:

  • cluster

  • machine

  • baremetalhost

  • baremetalhostprofile

  • l2template

  • subnet

  • ipamhost

  • ipaddr

kubectl delete <objectName> -l kaas.mirantis.com/region=<regionName>

Warning

Do not use the same region name again after the regional cluster deployment failure since some objects that reference the region name may still exist.


[22563] Failure to deploy a bare metal node with RAID 1

Fixed in 2.17.0

Deployment of a bare metal node with an mdadm-based raid10 with LVM enabled fails during provisioning due to insufficient cleanup of RAID devices.

Workaround:

  1. Boot the affected node from any LiveCD, preferably Ubuntu.

  2. Obtain details about the mdadm RAID devices:

    sudo mdadm --detail --scan --verbose
    
  3. Stop all mdadm RAID devices listed in the output of the above command. For example:

    sudo mdadm --stop /dev/md0
    
  4. Clean up the metadata on partitions with the mdadm RAID device(s) enabled. For example:

    sudo mdadm --zero-superblock /dev/sda1
    

    In the above example, replace /dev/sda1 with the partitions listed in the output of the command in step 2.


[17792] Full preflight fails with a timeout waiting for BareMetalHost

If you run bootstrap.sh preflight with KAAS_BM_FULL_PREFLIGHT=true, the script fails with the following message:

preflight check failed: preflight full check failed: \
error waiting for BareMetalHosts to power on: \
timed out waiting for the condition

Workaround:

  1. Disable the full preflight check by unsetting the KAAS_BM_FULL_PREFLIGHT environment variable.

  2. Rerun bootstrap.sh preflight, which then executes the fast preflight check instead, as shown in the example below.
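
    For example, a combined sketch of the two steps above, assuming bootstrap.sh resides in your current working directory:

    unset KAAS_BM_FULL_PREFLIGHT
    ./bootstrap.sh preflight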


IAM
[18331] Keycloak admin console menu disappears on ‘Add identity provider’ page

Fixed in 2.18.0

During configuration of a SAML identity provider using the Add identity provider menu of the Keycloak admin console, the page style breaks and the Save and Cancel buttons disappear.

Workaround:

  1. Log in to the Keycloak admin console.

  2. In the sidebar menu, switch to the Master realm.

  3. Navigate to Realm Settings > Themes.

  4. In the Admin Console Theme drop-down menu, select keycloak.

  5. Click Save and refresh the browser window to apply the changes.


StackLight
[20876] StackLight pods get stuck with the ‘NodeAffinity failed’ error

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshoot StackLight.

On a managed cluster, the StackLight pods may get stuck with the Pod predicate NodeAffinity failed error in the pod status. The issue may occur if the StackLight node label was added to one machine and then removed from another one.

The issue does not affect the StackLight services: all required StackLight pods migrate successfully, except for the extra pods that are created and get stuck during pod migration.

As a workaround, remove the stuck pods:

kubectl --kubeconfig <managedClusterKubeconfig> -n stacklight delete pod <stuckPodName>

[23006] StackLight endpoint crashes on start: private key does not match public key

Fixed in 2.17.0

In rare cases, StackLight applications may receive the wrong TLS certificates, which prevents them from starting correctly.

As a workaround, delete the old secret for the affected StackLight component. For example, for iam-proxy-alerta:

kubectl -n stacklight delete secret iam-proxy-alerta-tls-certs


LCM
[22341] The cordon-drain states are not removed after maintenance mode is unset

Fixed in 2.17.0

The cordon-drain states are not removed after the maintenance mode is unset for a machine. This issue may occur due to the maintenance transition being stuck on the NodeWorkloadLock object.

Workaround:

Select from the following options:

  • Disable the maintenance mode on the affected cluster as described in Enable cluster and machine maintenance mode.

  • Edit the LCMClusterState object by setting value to "false" in the spec section:

    kubectl edit lcmclusterstates -n <projectName> <LCMClusterStateName>
    
    apiVersion: lcm.mirantis.com/v1alpha1
    kind: LCMClusterState
    metadata:
      ...
    spec:
      ...
      value: "false"
    

Upgrade
[21810] Upgrade to Cluster releases 5.22.0 and 7.5.0 may get stuck

Affects Ubuntu-based clusters deployed after Feb 10, 2022

If you deploy an Ubuntu-based cluster using the deprecated Cluster release 7.4.0 (and earlier) or 5.21.0 (and earlier) starting from February 10, 2022, the cluster update to the Cluster releases 7.5.0 and 5.22.0 may get stuck while applying the Deploy state to the cluster machines. The issue affects all cluster types: management, regional, and managed.

To verify that the cluster is affected:

  1. Log in to the Container Cloud web UI.

  2. In the Clusters tab, capture the RELEASE and AGE values of the required Ubuntu-based cluster. If the values match the ones from the issue description, the cluster may be affected.

  3. Using SSH, log in to the manager or worker node that got stuck while applying the Deploy state and identify the containerd package version:

    containerd --version
    

    If the version is 1.5.9, the cluster is affected.

  4. In /var/log/lcm/runners/<nodeName>/deploy/, verify whether the Ansible deployment logs contain the following errors that indicate that the cluster is affected:

    The following packages will be upgraded:
      docker-ee docker-ee-cli
    The following packages will be DOWNGRADED:
      containerd.io
    
    STDERR:
    E: Packages were downgraded and -y was used without --allow-downgrades.
    

Workaround:

Warning

Apply the steps below to the affected nodes one-by-one and only after each consecutive node gets stuck on the Deploy phase with the Ansible log errors. This sequence ensures that each node is cordoned and drained and that Docker is properly stopped, so no workloads are affected.

  1. Using SSH, log in to the first affected node and install containerd 1.5.8:

    apt-get install containerd.io=1.5.8-1 -y --allow-downgrades --allow-change-held-packages
    
  2. Wait for Ansible to reconcile. The node should become Ready in several minutes.

  3. Wait for the next node of the cluster to get stuck on the Deploy phase with the Ansible log errors. Only after that, apply the steps above on the next node.

  4. Patch the remaining nodes one-by-one using the steps above.


Container Cloud web UI
[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs because the token is missing the necessary role for the new project. As a workaround, log out and log back in to the Container Cloud web UI.



Cluster health
[21494] Controller pods are OOMkilled after deployment

Fixed in 2.17.0

After a successful deployment of a management or regional cluster, controller pods may be OOMkilled and get stuck in the CrashLoopBackOff state due to incorrect memory limits.

Workaround:

Increase memory resources limits on the affected Deployment:

  1. Open the affected Deployment configuration for editing:

    kubectl --kubeconfig <mgmtOrRegionalKubeconfig> -n kaas edit deployment <deploymentName>
    
  2. Increase the value of spec.template.spec.containers.resources.limits by 100-200 Mi. For example:

    spec:
      template:
        spec:
          containers:
          - ...
            resources:
              limits:
                cpu: "3"
                memory: 500Mi
              requests:
                cpu: "1"
                memory: 300Mi
    
Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.16.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.29.6

aws-credentials-controller

1.29.6

Azure Updated

azure-provider

1.29.6

azure-credentials-controller

1.29.6

Bare metal

ambassador

1.20.1-alpine

baremetal-operator Updated

6.1.2

baremetal-public-api Updated

6.1.3

baremetal-provider Updated

1.29.9

baremetal-resource-controller Updated

base-focal-20220128182941

ironic Updated

victoria-bionic-20220208100053

ironic-operator Updated

base-focal-20220217095047

kaas-ipam Updated

base-focal-20220131093130

keepalived Updated

2.1.5

local-volume-provisioner

2.5.0-mcp

mariadb Updated

10.4.17-bionic-20220113085105

IAM

iam Updated

2.4.14

iam-controller Updated

1.29.6

keycloak

15.0.2

Container Cloud

admission-controller Updated

1.29.7

agent-controller Updated

1.29.6

byo-credentials-controller Updated

1.29.6

byo-provider Updated

1.29.6

cert-manager New

1.29.6

client-certificate-controller New

1.29.6

event-controller New

1.29.6

golang Updated

1.17.6

kaas-public-api Updated

1.29.6

kaas-exporter Updated

1.29.6

kaas-ui Updated

1.29.6

lcm-controller Updated

0.3.0-187-gba894556

license-controller New

1.29.6

mcc-cache Updated

1.29.6

portforward-controller Updated

1.29.6

proxy-controller Updated

1.29.6

rbac-controller Updated

1.29.6

release-controller Updated

1.29.7

rhellicense-controller Updated

1.29.6

scope-controller Updated

1.29.6

squid-proxy

0.0.1-6

user-controller Updated

1.29.6

Equinix Metal Updated

equinix-provider

1.29.6

equinix-credentials-controller

1.29.6

keepalived

2.1.5

OpenStack Updated

openstack-provider

1.29.6

os-credentials-controller

1.29.6

VMware vSphere Updated

vsphere-provider

1.29.6

vsphere-credentials-controller

1.29.6

keepalived

2.1.5

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.16.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-6.1.2.tgz

baremetal-public-api Updated

https://binary.mirantis.com/bm/helm/baremetal-public-api-6.1.3.tgz

ironic-python-agent-bionic.kernel Removed

Replaced with ironic-python-agent.kernel

ironic-python-agent-bionic.initramfs Removed

Replaced with ironic-python-agent.initramfs

ironic-python-agent.initramfs New

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-victoria-focal-debug-20220208120746

ironic-python-agent.kernel New

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-victoria-focal-debug-20220208120746

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-6.1.2.tgz

local-volume-provisioner

https://binary.mirantis.com/bm/helm/local-volume-provisioner-2.5.0-mcp.tgz

provisioning_ansible Updated

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-102-08af94e.tgz

target ubuntu system

https://binary.mirantis.com/bm/bin/efi/ubuntu/tgz-bionic-20210622161844

Docker images

ambassador

mirantis.azurecr.io/general/external/docker.io/library/nginx:1.20.1-alpine

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-focal-20220208045851

baremetal-resource-controller Updated

mirantis.azurecr.io/bm/baremetal-resource-controller:base-focal-20220128182941

dynamic_ipxe New

mirantis.azurecr.io/bm/dnsmasq/dynamic-ipxe:base-focal-20220126144549

dnsmasq Updated

mirantis.azurecr.io/general/dnsmasq:focal-20210617094827

ironic Updated

mirantis.azurecr.io/openstack/ironic:victoria-bionic-20220208100053

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:victoria-bionic-20220208100053

ironic-operator Updated

mirantis.azurecr.io/bm/ironic-operator:base-focal-20220217095047

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20210608113804

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-focal-20220131093130

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.4.17-bionic-20220113085105

mcc-keepalived New

mirantis.azurecr.io/lcm/mcc-keepalived:v0.14.0-1-g8725814

syslog-ng Updated

mirantis.azurecr.io/bm/syslog-ng:base-focal-20220128103433


Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.29.6.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.29.6.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.29.7.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.29.6.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.29.6.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.29.6.tgz

azure-credentials-controller

https://binary.mirantis.com/core/helm/azure-credentials-controller-1.29.6.tgz

azure-provider

https://binary.mirantis.com/core/helm/azure-provider-1.29.6.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.29.9.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.29.6.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.29.6.tgz

cert-manager New

https://binary.mirantis.com/core/helm/cert-manager-1.29.6.tgz

client-certificate-controller New

https://binary.mirantis.com/core/helm/client-certificate-controller-1.29.6.tgz

equinix-credentials-controller

https://binary.mirantis.com/core/helm/equinix-credentials-controller-1.29.6.tgz

equinix-provider

https://binary.mirantis.com/core/helm/equinix-provider-1.29.6.tgz

equinixmetalv2-provider

https://binary.mirantis.com/core/helm/equinixmetalv2-provider-1.29.6.tgz

event-controller New

https://binary.mirantis.com/core/helm/event-controller-1.29.6.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.29.6.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.29.6.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.29.6.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.29.6.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.29.6.tgz

license-controller New

https://binary.mirantis.com/core/helm/license-controller-1.29.6.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.29.6.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.29.6.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.29.6.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.29.6.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.29.6.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.29.6.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.29.7.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.29.6.tgz

scope-controller

http://binary.mirantis.com/core/helm/scope-controller-1.29.6.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.29.6.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.29.6.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.29.6.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.29.6.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.29.7

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.29.6

aws-cluster-api-controller Updated

mirantis.azurecr.io/core/aws-cluster-api-controller:1.29.6

aws-credentials-controller Updated

mirantis.azurecr.io/core/aws-credentials-controller:1.29.6

azure-cluster-api-controller Updated

mirantis.azurecr.io/core/azure-cluster-api-controller:1.29.6

azure-credentials-controller Updated

mirantis.azurecr.io/core/azure-credentials-controller:1.29.6

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.29.6

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.29.6

cert-manager-controller New

mirantis.azurecr.io/core/external/cert-manager-controller:v1.6.1

client-certificate-controller New

mirantis.azurecr.io/core/client-certificate-controller:1.29.6

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.29.6

cluster-api-provider-equinix Updated

mirantis.azurecr.io/core/cluster-api-provider-equinix:1.29.6

equinix-credentials-controller Updated

mirantis.azurecr.io/core/equinix-credentials-controller:1.29.6

frontend Updated

mirantis.azurecr.io/core/frontend:1.29.6

haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.12.0-8-g6fabf1c

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.29.6

kproxy Updated

mirantis.azurecr.io/lcm/kproxy:1.29.6

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:v0.3.0-187-gba894556

license-controller New

mirantis.azurecr.io/core/license-controller:1.29.6

mcc-keepalived New

mirantis.azurecr.io/lcm/mcc-keepalived:v0.14.0-1-g8725814

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.29.6

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.29.6

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.29.6

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.29.6

registry

mirantis.azurecr.io/lcm/registry:2.7.1

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.29.7

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.29.6

scope-controller Updated

mirantis.azurecr.io/core/scope-controller:1.29.6

squid-proxy

mirantis.azurecr.io/core/squid-proxy:0.0.1-6

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-api-controller:1.29.6

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.29.6

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.29.6


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

iamctl-linux Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.5-linux

iamctl-darwin Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.5-darwin

iamctl-windows Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.5-windows

Helm charts

iam Updated

http://binary.mirantis.com/iam/helm/iam-2.4.14.tgz

iam-proxy Updated

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.12.tgz

keycloak_proxy Updated

http://binary.mirantis.com/core/helm/keycloak_proxy-1.29.8.tgz

Docker images

api Deprecated

mirantis.azurecr.io/iam/api:0.5.5

auxiliary Updated

mirantis.azurecr.io/iam/auxiliary:0.5.5

kubernetes-entrypoint

mirantis.azurecr.io/iam/external/kubernetes-entrypoint:v0.3.1

mariadb

mirantis.azurecr.io/general/mariadb:10.4.16-bionic-20201105025052

keycloak

mirantis.azurecr.io/iam/keycloak:0.5.4

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-2

2.15.1

The Mirantis Container Cloud GA release 2.15.1 is based on 2.15.0 and:

  • Introduces support for the Cluster release 8.5.0 that is based on the Cluster release 7.5.0 and represents Mirantis OpenStack for Kubernetes (MOSK) 22.1. This Cluster release is based on Mirantis Kubernetes Engine 3.4.6 with Kubernetes 1.20 and Mirantis Container Runtime 20.10.8.

  • Supports the latest Cluster releases 7.5.0 and 5.22.0.

  • Does not support new deployments based on the Cluster releases 7.4.0 and 5.21.0 that were deprecated in 2.15.0.

For details about the Container Cloud release 2.15.1, refer to its parent release 2.15.0:

Caution

Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

2.15.0

The Mirantis Container Cloud GA release 2.15.0:

  • Introduces support for the Cluster release 7.5.0 that is based on Mirantis Container Runtime 20.10.8 and the updated version of Mirantis Kubernetes Engine 3.4.6 with Kubernetes 1.20.

  • Introduces support for the Cluster release 5.22.0 that is based on the updated version of Mirantis Kubernetes Engine 3.3.13 with Kubernetes 1.18 and Mirantis Container Runtime 20.10.8.

  • Supports the Cluster release 6.20.0 that is based on the Cluster release 5.20.0 and represents Mirantis OpenStack for Kubernetes (MOS) 21.6.

  • Does not support greenfield deployments on deprecated Cluster releases 7.4.0, 6.19.0, and 5.21.0. Use the latest Cluster releases of the series instead.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.15.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.15.0. For the list of enhancements in the Cluster releases 7.5.0 and 5.22.0 that are supported by the Container Cloud release 2.15.0, see the Cluster releases (managed).


Automatic upgrade of bare metal host operating system during cluster update

Introduced automatic upgrade of Ubuntu 18.04 packages on the bare metal hosts during a management or managed cluster update.

Mirantis Container Cloud uses life cycle management tools to update the operating system packages on the bare metal hosts. Container Cloud may also trigger restart of the bare metal hosts to apply the updates, when applicable.

Warning

During managed cluster update to the latest Cluster releases available in Container Cloud 2.15.0, hosts are restarted to apply the latest supported Ubuntu 18.04 packages and update kernel to version 5.4.0-90.101.

If Ceph is installed in the cluster, the Container Cloud orchestration securely pauses the Ceph OSDs on the node before restart. This allows avoiding degradation of the storage service.

Dedicated subnet for externally accessible Kubernetes API endpoint

TechPreview

Implemented a capability to add a dedicated subnet for the externally accessible Kubernetes API endpoint of a baremetal-based managed cluster.

HAProxy instead of NGINX for vSphere, Equinix Metal, and bare metal providers

Reworked the high availability setup for the Container Cloud manager nodes of the vSphere, Equinix Metal, and bare metal providers to use HAProxy instead of NGINX, which adds a health check mechanism that verifies target server availability. This change affects only the Ansible part. HAProxy is deployed as a container managed directly by containerd.

Additional regional cluster on Equinix Metal with private networking

Extended the regional clusters support by implementing the capability to deploy an additional regional cluster on Equinix Metal with private networking. This provides the capability to create managed clusters in the Equinix Metal regions with private networking in parallel with managed clusters of other supported providers within a single Container Cloud deployment.

Scheduled Container Cloud auto-upgrade

TechPreview

Introduced the initial Technology Preview support for a scheduled Container Cloud auto-upgrade using the MCCUpgrade object named mcc-upgrade in Kubernetes API.

An Operator can delay or reschedule the Container Cloud auto-upgrade, which allows:

  • Blocking Container Cloud upgrade process for up to 7 days from the current date and up to 30 days from the latest Container Cloud release

  • Limiting hours and weekdays when Container Cloud upgrade can run

Caution

Only the management cluster admin has access to the MCCUpgrade object. You must use kubeconfig generated during the management cluster bootstrap to access this object.

Note

Scheduling of the Container Cloud auto-upgrade using the Container Cloud web UI will be implemented in one of the following releases.
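
For example, to review or adjust the current auto-upgrade schedule, you can edit the MCCUpgrade object directly. This is only a sketch: the exact spec fields are described in the Operations Guide, and <mgmtClusterKubeconfig> stands for the kubeconfig generated during the management cluster bootstrap:

kubectl --kubeconfig <mgmtClusterKubeconfig> edit mccupgrade mcc-upgrade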

Cluster and machine maintenance mode

Implemented the maintenance mode for management and managed clusters and machines to prepare workloads for maintenance operations.

  • To enable maintenance mode on a machine, first enable maintenance mode on a related cluster.

  • To disable maintenance mode on a cluster, first disable maintenance mode on all machines of the cluster.

Warning

Cluster upgrades and configuration changes (except for the SSH keys setting) are unavailable while a cluster is under maintenance. Make sure you disable maintenance mode on the cluster after maintenance is complete.

Improvements for monitoring of machine deployment live status

Implemented the following improvements to the live status of a machine deployment that you can monitor using the Container Cloud web UI:

  • Increased the events coverage

  • Added information about cordon and drain (whether a node is being cordoned, drained, or uncordoned) to the Kubelet and Swarm machine component statuses.

These improvements are implemented for all supported Container Cloud providers.

Deprecation of iam-api and IAM CLI

Deprecated the iam-api service and IAM CLI (the iamctl command). The iam-api logic required by Container Cloud has moved to scope-controller, and the iam-api service is now used only by IAM CLI to manage users and permissions. Instead of IAM CLI, Mirantis recommends using the Keycloak web UI to perform the necessary IAM operations.

The iam-api service and IAM CLI will be removed in one of the following Container Cloud releases.

Switch of Ceph Helm releases from v2 to v3

Upgraded the Ceph Helm releases in the ClusterRelease object from v2 to v3. Switching of the remaining OpenStack Helm releases for Mirantis OpenStack for Kubernetes to v3 will be implemented in one of the following Container Cloud releases.

Documentation enhancements

On top of continuous improvements delivered to the existing Container Cloud guides, added the following procedures:

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.15.0 along with the Cluster releases 7.5.0 and 5.22.0:

  • vSphere:

    • [19737] Fixed the issue with the vSphere VM template build hanging with an empty kickstart file on the vSphere deployments with the RHEL 8.4 seed node.

    • [19468] Fixed the issue with the ‘Failed to remove finalizer from machine’ error during cluster deletion if a RHEL license is removed before the related managed cluster was deleted.

  • IAM:

    • [5025] Updated the Keycloak version from 12.0.0 to 15.0.2 to fix the CVE-2020-2757.

    • [21024][Custom certificates] Fixed the issue with the readiness check failure during addition of a custom certificate for Keycloak that hung with the failed to wait for OIDC certificate to be updated timeout warning.

  • StackLight:

    • [20193] Updated the Grafana Docker image from 8.2.2 to 8.2.7 to fix the high-severity CVE-2021-43798.

    • [18933] Fixed the issue with the Alerta pods failing to pass the readiness check even if Patroni, the Alerta backend, operated correctly.

    • [19682] Fixed the issue with the Prometheus web UI URLs in notifications sent to Salesforce using the HTTP protocol instead of HTTPS on deployments with TLS enabled for IAM.

  • Ceph:

    • [19645] Fixed the issue with the Ceph OSD removal request failure during the Processing stage.

    • [19574] Fixed the issue with the Ceph OSD removal not cleaning up the device used for multiple OSDs.

    • [20298] Fixed the issue with spec validation failing during creation of KaaSCephOperationRequest.

    • [20355] Fixed the issue with KaaSCephOperationRequest being cached after recreation with the same name, specified in metadata.name, as the previous KaaSCephOperationRequest CR. The issue caused no removal to be performed upon applying the new KaaSCephOperationRequest CR.

  • Bare metal:

    • [19786] Fixed the issue with managed cluster deployment failing on long-running management clusters with BareMetalHost being stuck in the Preparing state and the ironic-conductor and ironic-api pods reporting the not enough disk space error due to the dnsmasq-dhcpd logs overflow.

  • Upgrade:

    • [20459] Fixed the issue with failure to upgrade a management or regional cluster originally deployed using the Container Cloud release earlier than 2.8.0. The failure occurred during Ansible update if a machine contained /usr/local/share/ca-certificates/mcc.crt, which was either empty or invalid.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.15.0 including the Cluster releases 7.5.0 and 5.22.0.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


MKE
[20651] A cluster deployment or update fails with not ready compose deployments

A managed cluster deployment, attachment, or update to a Cluster release with MKE versions 3.3.13, 3.4.6, 3.5.1, or earlier may fail with the compose pods flapping (ready > terminating > pending) and with the following error message appearing in logs:

'not ready: deployments: kube-system/compose got 0/0 replicas, kube-system/compose-api
 got 0/0 replicas'
 ready: false
 type: Kubernetes

Workaround:

  1. Disable Docker Content Trust (DCT):

    1. Access the MKE web UI as admin.

    2. Navigate to Admin > Admin Settings.

    3. In the left navigation pane, click Docker Content Trust and disable it.

  2. Restart the affected deployments such as calico-kube-controllers, compose, compose-api, coredns, and so on:

    kubectl -n kube-system delete deployment <deploymentName>
    

    Once done, the cluster deployment or update resumes.

  3. Re-enable DCT.



Equinix Metal
[20467] Failure to deploy an Equinix Metal based management cluster

Fixed in 2.16.0

Deployment of an Equinix Metal based management cluster with private networking may fail with the following error message during the Ironic deployment. The issue is caused by csi-rbdplugin provisioner pods that get stuck.

0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.

The workaround is to restart the csi-rbdplugin provisioner pods:

kubectl -n rook-ceph delete pod -l app=csi-rbdplugin-provisioner


Bare metal
[20745] Namespace deletion failure after managed cluster removal

Fixed in 2.16.0

After removal of a managed cluster, the namespace is not deleted due to KaaSCephOperationRequest CRs blocking the deletion. The workaround is to manually remove finalizers and delete the KaaSCephOperationRequest CRs.

Workaround:

  1. Remove finalizers from all KaaSCephOperationRequest resources:

    kubectl -n <managed-ns> get kaascephoperationrequest -o name | xargs -I % kubectl -n <managed-ns> patch % -p '{"metadata":{"finalizers":null}}' --type=merge
    
  2. Delete all KaaSCephOperationRequest resources:

    kubectl -n <managed-ns> delete kaascephoperationrequest --all
    

[17792] Full preflight fails with a timeout waiting for BareMetalHost

If you run bootstrap.sh preflight with KAAS_BM_FULL_PREFLIGHT=true, the script fails with the following message:

preflight check failed: preflight full check failed: \
error waiting for BareMetalHosts to power on: \
timed out waiting for the condition

Workaround:

  1. Disable the full preflight check by unsetting the KAAS_BM_FULL_PREFLIGHT environment variable.

  2. Rerun bootstrap.sh preflight, which then executes the fast preflight check instead.


IAM
[18331] Keycloak admin console menu disappears on ‘Add identity provider’ page

Fixed in 2.18.0

During configuration of a SAML identity provider using the Add identity provider menu of the Keycloak admin console, the page style breaks and the Save and Cancel buttons disappear.

Workaround:

  1. Log in to the Keycloak admin console.

  2. In the sidebar menu, switch to the Master realm.

  3. Navigate to Realm Settings > Themes.

  4. In the Admin Console Theme drop-down menu, select keycloak.

  5. Click Save and refresh the browser window to apply the changes.


LCM
[22341] The cordon-drain states are not removed after maintenance mode is unset

Fixed in 2.17.0

The cordon-drain states are not removed after the maintenance mode is unset for a machine. This issue may occur due to the maintenance transition being stuck on the NodeWorkloadLock object.

Workaround:

Select from the following options:

  • Disable the maintenance mode on the affected cluster as described in Enable cluster and machine maintenance mode.

  • Edit the LCMClusterState object by setting value to "false" in the spec section:

    kubectl edit lcmclusterstates -n <projectName> <LCMClusterStateName>
    
    apiVersion: lcm.mirantis.com/v1alpha1
    kind: LCMClusterState
    metadata:
      ...
    spec:
      ...
      value: "false"
    

Monitoring
[20876] StackLight pods get stuck with the ‘NodeAffinity failed’ error

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshoot StackLight.

On a managed cluster, the StackLight pods may get stuck with the Pod predicate NodeAffinity failed error in the pod status. The issue may occur if the StackLight node label was added to one machine and then removed from another one.

The issue does not affect the StackLight services: all required StackLight pods migrate successfully, except for the extra pods that are created and get stuck during pod migration.

As a workaround, remove the stuck pods:

kubectl --kubeconfig <managedClusterKubeconfig> -n stacklight delete pod <stuckPodName>

[21646] The kaas-exporter container is periodically throttled and OOMKilled

Fixed in 2.16.0

On highly loaded clusters, the kaas-exporter resource limits for CPU and RAM are lower than the amount of resources consumed. As a result, the kaas-exporter container is periodically throttled and OOMKilled, which prevents gathering of the Container Cloud metrics.

As a workaround, increase the default resource limits for kaas-exporter in the Cluster object of the management cluster. For example:

spec:
  ...
  providerSpec:
    ...
    value:
      ...
      kaas:
        management:
          helmReleases:
          ...
          - name: kaas-exporter
            values:
              resources:
                limits:
                  cpu: 100m
                  memory: 200Mi


Upgrade
[21810] Upgrade to Cluster releases 5.22.0 and 7.5.0 may get stuck

Affects Ubuntu-based clusters deployed after Feb 10, 2022

If you deploy an Ubuntu-based cluster using the deprecated Cluster release 7.4.0 (and earlier) or 5.21.0 (and earlier) starting from February 10, 2022, the cluster update to the Cluster releases 7.5.0 and 5.22.0 may get stuck while applying the Deploy state to the cluster machines. The issue affects all cluster types: management, regional, and managed.

To verify that the cluster is affected:

  1. Log in to the Container Cloud web UI.

  2. In the Clusters tab, capture the RELEASE and AGE values of the required Ubuntu-based cluster. If the values match the ones from the issue description, the cluster may be affected.

  3. Using SSH, log in to the manager or worker node that got stuck while applying the Deploy state and identify the containerd package version:

    containerd --version
    

    If the version is 1.5.9, the cluster is affected.

  4. In /var/log/lcm/runners/<nodeName>/deploy/, verify whether the Ansible deployment logs contain the following errors that indicate that the cluster is affected:

    The following packages will be upgraded:
      docker-ee docker-ee-cli
    The following packages will be DOWNGRADED:
      containerd.io
    
    STDERR:
    E: Packages were downgraded and -y was used without --allow-downgrades.
    

Workaround:

Warning

Apply the steps below to the affected nodes one-by-one and only after each consecutive node gets stuck on the Deploy phase with the Ansible log errors. This sequence ensures that each node is cordoned and drained and that Docker is properly stopped, so no workloads are affected.

  1. Using SSH, log in to the first affected node and install containerd 1.5.8:

    apt-get install containerd.io=1.5.8-1 -y --allow-downgrades --allow-change-held-packages
    
  2. Wait for Ansible to reconcile. The node should become Ready in several minutes.

  3. Wait for the next node of the cluster to get stuck on the Deploy phase with the Ansible log errors. Only after that, apply the steps above on the next node.

  4. Patch the remaining nodes one-by-one using the steps above.

[20189] Container Cloud web UI reports upgrade while running previous release

Fixed in 2.16.0

Under certain conditions, the upgrade of the baremetal-based management cluster may get stuck even though the Container Cloud web UI reports a successful upgrade. The issue is caused by inconsistent metadata in IPAM that prevents automatic allocation of the Ceph network. It happens when IPAddr objects associated with the management cluster nodes refer to a non-existent Subnet object by the resource UID.

To verify whether the cluster is affected:

  1. Inspect the baremetal-provider logs:

    kubectl -n kaas logs deployments/baremetal-provider
    

    If the logs contain the following entries, the cluster may be affected:

    Ceph public network address validation failed for cluster default/kaas-mgmt: invalid address '0.0.0.0/0' \
    
    Ceph cluster network address validation failed for cluster default/kaas-mgmt: invalid address '0.0.0.0/0' \
    
    'default/kaas-mgmt' cluster nodes internal (LCM) IP addresses: 10.64.96.171,10.64.96.172,10.64.96.173 \
    
    failed to configure ceph network for cluster default/kaas-mgmt: \
    Ceph network addresses auto-assignment error: validation failed for Ceph network addresses: \
    error parsing address '': invalid CIDR address:
    

    Empty values of the network parameters in the last entry indicate that the provider cannot locate the Subnet object based on the data from the IPAddr object.

    Note

    In the logs, capture the internal (LCM) IP addresses of the cluster nodes to use them later in this procedure.

  2. Validate the network address used for Ceph by inspecting the MiraCeph object:

    kubectl -n ceph-lcm-mirantis get miraceph -o yaml | egrep "^ +clusterNet:"
    kubectl -n ceph-lcm-mirantis get miraceph -o yaml | egrep "^ +publicNet:"
    

    In the system response, verify that the clusterNet and publicNet values do not contain the 0.0.0.0/0 range.

    Example of the system response on the affected cluster:

    clusterNet: 0.0.0.0/0
    
    publicNet: 0.0.0.0/0
    

Workaround:

  1. Add a label to the Subnet object:

    Note

    To obtain the correct name of the label, use one of the cluster nodes internal (LCM) IP addresses from the baremetal-provider logs.

    1. Capture the subnet ID of the IPAddr object in the SUBNETID environment variable. For example:

      SUBNETID=$(kubectl get ipaddr -n default --selector=ipam/IP=10.64.96.171 -o custom-columns=":metadata.labels.ipam/SubnetID" | tr -d '\n')
      
    2. Use the SUBNETID variable to restore the required label in the Subnet object:

      kubectl -n default label subnet master-region-one ipam/UID-${SUBNETID}="1"
      
  2. Verify that the cluster.sigs.k8s.io/cluster-name label exists for IPaddr objects:

    kubectl -n default get ipaddr --show-labels|grep "cluster.sigs.k8s.io/cluster-name"
    

    Skip the next step if all IPaddr objects corresponding to the management cluster nodes have this label.

  3. Add the cluster.sigs.k8s.io/cluster-name label to IPaddr objects:

    IPADDRNAMES=$(kubectl -n default get ipaddr -o custom-columns=":metadata.name")
    for IP in $IPADDRNAMES; do kubectl -n default label ipaddr $IP cluster.sigs.k8s.io/cluster-name=<managementClusterName>; done
    

    In the command above, substitute <managementClusterName> with the corresponding value.


[16379,23865] Cluster update fails with the FailedMount warning

Fixed in 2.19.0

An Equinix-based management or managed cluster fails to update with the FailedAttachVolume and FailedMount warnings.

Workaround:

  1. Verify that the descriptions of the pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    
    • <affectedProjectName> is the Container Cloud project name where the pods failed to run

    • <affectedPodName> is a pod name that failed to run in this project

    In the pod description, identify the node name where the pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down to 0 replicas the StatefulSet or Deployment that owns the pod that fails to initialize.

  4. On every csi-rbdplugin pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state is Running.



Container Cloud web UI
[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs because the token is missing the necessary role for the new project. As a workaround, log out and log back in to the Container Cloud web UI.


Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.15.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.28.7

aws-credentials-controller

1.28.7

Azure Updated

azure-provider

1.28.7

azure-credentials-controller

1.28.7

Bare metal

ambassador

1.20.1-alpine

baremetal-operator Updated

6.0.4

baremetal-public-api Updated

6.0.4

baremetal-provider Updated

1.28.7

baremetal-resource-controller Updated

base-bionic-20211224163705

ironic Updated

victoria-bionic-20211213142623

ironic-operator

base-bionic-20210930105000

kaas-ipam Updated

base-bionic-20211213150212

local-volume-provisioner

2.5.0-mcp

mariadb

10.4.17-bionic-20210617085111

IAM

iam

2.4.10

iam-controller Updated

1.28.7

keycloak Updated

15.0.2

Container Cloud Updated

admission-controller

1.28.7 (1.28.18 for 2.15.1)

agent-controller

1.28.7

byo-credentials-controller

1.28.7

byo-provider

1.28.7

kaas-public-api

1.28.7

kaas-exporter

1.28.7

kaas-ui

1.28.8

lcm-controller

0.3.0-132-g83a348fa

mcc-cache

1.28.7

portforward-controller

1.28.12

proxy-controller

1.28.7

rbac-controller

1.28.7

release-controller

1.28.7

rhellicense-controller

1.28.7

scope-controller New

1.28.7

squid-proxy

0.0.1-6

user-controller

1.28.7

Equinix Metal Updated

equinix-provider

1.28.11

equinix-credentials-controller

1.28.7

OpenStack Updated

openstack-provider

1.28.7

os-credentials-controller

1.28.7

VMware vSphere Updated

vsphere-provider

1.28.7

vsphere-credentials-controller

1.28.7

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.15.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-6.0.4.tgz

baremetal-public-api Updated

https://binary.mirantis.com/bm/helm/baremetal-public-api-6.0.4.tgz

ironic-python-agent-bionic.kernel Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-victoria-bionic-5.4-debug-20211126120723

ironic-python-agent-bionic.initramfs Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-victoria-bionic-5.4-debug-20211126120723

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-6.0.4.tgz

local-volume-provisioner

https://binary.mirantis.com/bm/helm/local-volume-provisioner-2.5.0-mcp.tgz

provisioning_ansible Updated

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-88-02063c4.tgz

target ubuntu system

https://binary.mirantis.com/bm/bin/efi/ubuntu/tgz-bionic-20210622161844

Docker images

ambassador

mirantis.azurecr.io/general/external/docker.io/library/nginx:1.20.1-alpine

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-bionic-20211005112459

baremetal-resource-controller Updated

mirantis.azurecr.io/bm/baremetal-resource-controller:base-bionic-20211224163705

dnsmasq

mirantis.azurecr.io/general/dnsmasq:focal-20210617094827

ironic Updated

mirantis.azurecr.io/openstack/ironic:victoria-bionic-20211213142623

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:victoria-bionic-20211213142623

ironic-operator

mirantis.azurecr.io/bm/ironic-operator:base-bionic-20210930105000

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20210608113804

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-bionic-20211213150212

mariadb

mirantis.azurecr.io/general/mariadb:10.4.17-bionic-20210617085111

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-bionic-20210617094817


Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.28.7.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.28.7.tar.gz

Helm charts Updated

admission-controller 0

https://binary.mirantis.com/core/helm/admission-controller-1.28.7.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.28.7.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.28.7.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.28.7.tgz

azure-credentials-controller

https://binary.mirantis.com/core/helm/azure-credentials-controller-1.28.7.tgz

azure-provider

https://binary.mirantis.com/core/helm/azure-provider-1.28.7.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.28.7.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.28.7.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.28.7.tgz

equinix-credentials-controller

https://binary.mirantis.com/core/helm/equinix-credentials-controller-1.28.7.tgz

equinix-provider

https://binary.mirantis.com/core/helm/equinix-provider-1.28.11.tgz

equinixmetalv2-provider

https://binary.mirantis.com/core/helm/equinixmetalv2-provider-1.28.7.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.28.7.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.28.7.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.28.7.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.27.8.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.28.7.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.28.7.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.28.7.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.28.7.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.28.7.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.28.7.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.28.7.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.28.7.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.28.7.tgz

scope-controller New

http://binary.mirantis.com/core/helm/scope-controller-1.28.7.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.28.7.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.28.7.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.28.7.tgz

user-controller

https://binary.mirantis.com/core/helm/user-controller-1.28.7.tgz

Docker images

admission-controller 0 Updated

mirantis.azurecr.io/core/admission-controller:1.28.7

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.28.7

aws-cluster-api-controller Updated

mirantis.azurecr.io/core/aws-cluster-api-controller:1.28.7

aws-credentials-controller Updated

mirantis.azurecr.io/core/aws-credentials-controller:1.28.7

azure-cluster-api-controller Updated

mirantis.azurecr.io/core/azure-cluster-api-controller:1.28.7

azure-credentials-controller Updated

mirantis.azurecr.io/core/azure-credentials-controller:1.28.7

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.28.7

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.28.7

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.28.7

cluster-api-provider-equinix Updated

mirantis.azurecr.io/core/cluster-api-provider-equinix:1.28.7

equinix-credentials-controller Updated

mirantis.azurecr.io/core/equinix-credentials-controller:1.28.7

frontend Updated

mirantis.azurecr.io/core/frontend:1.28.8

haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.12.0-8-g6fabf1c

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.28.7

kproxy Updated

mirantis.azurecr.io/lcm/kproxy:1.28.7

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:v0.3.0-132-g83a348fa

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.28.7

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.28.7

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.28.12

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.28.7

registry

mirantis.azurecr.io/lcm/registry:2.7.1

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.28.7

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.28.7

scope-controller New

mirantis.azurecr.io/core/scope-controller:1.28.7

squid-proxy Updated

mirantis.azurecr.io/core/squid-proxy:0.0.1-6

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-api-controller:1.28.7

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.28.7

user-controller Updated

mirantis.azurecr.io/core/user-controller:1.28.7

0

In Container Cloud 2.15.1, the version of admission-controller is 1.28.18.


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

iamctl-linux

http://binary.mirantis.com/iam/bin/iamctl-0.5.4-linux

iamctl-darwin

http://binary.mirantis.com/iam/bin/iamctl-0.5.4-darwin

iamctl-windows

http://binary.mirantis.com/iam/bin/iamctl-0.5.4-windows

Helm charts

iam

http://binary.mirantis.com/iam/helm/iam-2.4.10.tgz

iam-proxy Updated

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.10.tgz

keycloak_proxy Updated

http://binary.mirantis.com/core/helm/keycloak_proxy-1.28.9.tgz

Docker images

api Deprecated

mirantis.azurecr.io/iam/api:0.5.4

auxiliary

mirantis.azurecr.io/iam/auxiliary:0.5.4

kubernetes-entrypoint

mirantis.azurecr.io/iam/external/kubernetes-entrypoint:v0.3.1

mariadb

mirantis.azurecr.io/general/mariadb:10.4.16-bionic-20201105025052

keycloak

mirantis.azurecr.io/iam/keycloak:0.5.4

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-2

Releases delivered in 2020-2021

This section contains historical information on the unsupported Container Cloud releases delivered in 2020-2021. For the latest supported Container Cloud release, see Container Cloud releases.

Unsupported Container Cloud releases 2020-2021

Version

Release date

Supported Cluster releases

Summary

2.14.0

Dec 07, 2021

  • Equinix Metal provider:

  • OpenStack provider:

    • Support of the community version of CentOS 7.9

    • Configuration of server metadata for machines in web UI

  • vSphere provider:

    • Initial RHEL 8.4 support TechPreview

    • Configuration of RAM and CPU for machines in web UI

  • Bare metal provider:

    • Visualization of service mapping in the bare metal IpamHost object

  • MKE support:

    • Support matrix of MKE versions for cluster attachment

    • MKE version update from 3.3.12 to 3.3.13 in the Cluster release 5.21.0 and from 3.4.5 to 3.4.6 in the Cluster release 7.4.0

  • IAM:

    • User access management through the Container Cloud API or web UI

    • Updated role naming used in Keycloak

  • LCM:

    • Switch of bare metal and StackLight Helm releases from v2 to v3

  • StackLight:

    • Network interfaces monitoring

    • Custom Prometheus recording rules

    • Syslog packet size configuration

    • Prometheus Relay configuration

  • Ceph:

    • Enhanced architecture

    • Networks validation

    • Automated Ceph OSD removal TechPreview

  • Container Cloud web UI:

    • The ‘Interface Guided Tour’ button in the Container Cloud web UI

2.13.1

Nov 11, 2021

Based on 2.13.0, this release introduces the Cluster release 6.20.0 that is based on 5.20.0 and supports Mirantis OpenStack for Kubernetes (MOS) 21.6.

For the list of Cluster releases 7.x and 5.x that are supported by 2.13.1 as well as for its features with addressed and known issues, refer to the parent release 2.13.0.

2.13.0

Oct 28, 2021

  • Configuration of multiple DHCP ranges for bare metal clusters

  • Updated RAM requirements for management and regional clusters

  • Improvements to StackLight alerting

  • Support for Telegraf 1.20.0

  • Documentation: How to renew the Container Cloud and MKE licenses

2.12.0

Oct 5, 2021

  • General availability of the Microsoft Azure cloud provider

  • Support for the Container Cloud deployment on top of MOS Victoria

  • TLS for all Container Cloud endpoints

  • LVM or mdadm RAID support for bare metal provisioning

  • Preparing state of a bare metal host

  • Migration of iam-proxy from Louketo Proxy to OAuth2 Proxy

  • Backup configuration for a MariaDB database on a management cluster

  • Renaming of the Container Cloud binary from kaas to container-cloud

  • MCR version update to 20.10.6

  • MKE version update to 3.4.5 for the Cluster release 7.2.0 and to 3.3.12 for Cluster releases 5.19.0, 6.19.0

  • Ceph:

    • Integration of the Ceph maintenance to the common upgrade procedure

    • Ceph RADOS Gateway tolerations

  • StackLight:

    • Short names for Kubernetes nodes in Grafana dashboards

    • Improvements to StackLight alerting

    • Logs-based metrics in StackLight

  • Documentation:

    • How to back up and restore an OpenStack or AWS-based management cluster

2.11.0

August 31, 2021

  • Technology Preview support for the Microsoft Azure cloud provider

  • RHEL 7.9 bootstrap node for the vSphere-based provider

  • Validation labels for the vSphere-based VM templates

  • Automatic migration of Docker data and LVP volumes to NVMe on AWS clusters

  • Switch of core Helm releases from v2 to v3

  • Bond interfaces for baremetal-based management clusters

  • Bare metal advanced configuration using web UI

  • Equinix Metal capacity labels for machines in web UI

  • Ceph:

    • Support for Ceph Octopus

    • Hyperconverged Ceph improvement

    • Ceph cluster status improvements

    • Ceph Manager modules

  • StackLight:

    • StackLight node labeling improvements

    • StackLight log level severity setting in web UI

    • Improvements to StackLight alerting

    • Salesforce feed update

  • Documentation:

    • How to manually remove a Ceph OSD from a Ceph cluster

    • How to update the Keycloak IP address on bare metal clusters

2.10.0

July 21, 2021

  • 7.x Cluster release series with updated versions of MCR 20.10.5, MKE 3.4.0, and Kubernetes 1.20.1

  • Support of MKE 3.3.3 - 3.3.6 and 3.4.0 for cluster attachment

  • Graceful MCR upgrade from 19.03.14 to 20.10.5

  • MKE logs gathering enhancements

  • VMware vSphere provider:

    • Initial CentOS support for the VMware vSphere provider

    • RHEL 7.9 support for the VMware vSphere provider

    • Removal of IAM and Keycloak IPs configuration

  • Ability to add or configure proxy on existing clusters

  • Command for creation of Keycloak users

  • Improvements to StackLight alerting

  • Log verbosity for StackLight components

  • Documentation:

    • How to move a Ceph Monitor daemon to another node

    • Manage user roles through Keycloak

2.9.0

June 15, 2021

  • Equinix Metal provider

  • Integration to Lens

  • New bootstrap node for additional regional clusters

  • TLS certificates for management cluster applications

  • Default Keycloak authorization in Container Cloud web UI

  • SSH keys management for mcc-user

  • vSphere resources controller

  • StackLight components upgrade

  • Ceph:

    • Multinetwork configuration

    • TLS for public endpoints

    • RBD mirroring support

2.8.0

May 18, 2021

  • Support for Keycloak 12.0

  • Ironic pod logs

  • LoadBalancer and ProviderInstance monitoring for cluster and machine statuses

  • Updated notification about outdated cluster version in web UI

  • StackLight improvements:

    • Notifications to Microsoft Teams

    • Notifications to ServiceNow

    • Log collection optimization

  • Ceph improvements:

    • Ceph default configuration options

    • Capability to define specifications for multiple Ceph nodes using lists

    • A number of new KaaSCephCluster configuration parameters

  • Documentation enhancements:

    • Ceph Monitors recovery

    • Silencing of StackLight alerts

2.7.0

April 22, 2021

  • Full support for the VMware vSphere provider

  • Universal SSH user

  • Configuration of SSH keys on existing clusters using web UI

  • Cluster and machines live statuses in web UI

  • Enabling of proxy access using web UI for vSphere, AWS, and bare metal

  • Log collection optimization in StackLight

  • Ceph enhancements:

    • Dedicated network for the Ceph distributed storage traffic

    • Ceph Multisite configuration

  • Documentation enhancements:

    • Ceph disaster recovery procedure

    • QuickStart guides

2.6.0

March 24, 2021

  • RHEL license activation using the activation key

  • Support for VMware vSphere Distributed Switch

  • VMware vSphere provider integration with IPAM controller

  • Proxy support for all Container Cloud providers

  • StackLight logging levels

  • StackLight remote logging to syslog

  • Hyperconverged Ceph

  • Ceph objectStorage section in KaasCephCluster

  • Ceph maintenance orchestration

  • Updated documentation on the bare metal networking

2.5.0

March 1, 2021

  • Support for Mirantis Kubernetes Engine 3.3.6

  • Support for Mirantis OpenStack for Kubernetes 21.1

  • Proxy support for OpenStack and VMware vSphere providers

  • NTP server configuration on regional clusters

  • Optimized ClusterRelease upgrade process

  • Dedicated network for external connection to the Kubernetes services on bare metal

  • Ceph RADOS Gateway HA

  • Ceph RADOS Gateway check box in Container Cloud web UI

  • Ceph maintenance label

  • Cerebro support for StackLight

  • Proxy support for StackLight

2.4.0

February 2, 2021

  • Support for the updated version of Mirantis Container Runtime 19.03.14

  • Dedicated network for Kubernetes pods traffic on bare metal clusters

  • Improvements for the feedback form in the Container Cloud web UI

  • StackLight enhancements:

    • Alert inhibition rules

    • Integration between Grafana and Kibana

    • New Telegraf alert TelegrafGatherErrors

    • Configuration of Ironic Telegraf input plugin

    • Automatically defined cluster ID

2.3.0

December 23, 2020

  • Support for Mirantis Kubernetes Engine 3.3.4 and Mirantis Container Runtime 19.03.13

  • Support for multiple host-specific L2 templates per a bare metal cluster

  • Additional regional cluster on VMware vSphere

  • Automated setup of a VM template for the VMware vSphere provider

  • StackLight support for VMware vSphere

  • Improvements in the Container Cloud logs collection

2.2.0

November 5, 2020

  • Support for VMware vSphere provider on RHEL

  • Kernel parameters management through BareMetalHostProfile

  • Support of multiple subnets per cluster

  • Optimization of the Container Cloud logs collection

  • Container Cloud API documentation for bare metal

2.1.0

October 19, 2020

  • Node labeling for machines

  • AWS resources discovery in the Container Cloud web UI

  • Credentials statuses for OpenStack and AWS in the Container Cloud web UI

  • StackLight improvements:

    • Grafana upgrade from version 6.6.2 to 7.1.5

    • Grafana Image Renderer pod to offload rendering of images from charts

    • Grafana home dashboard improvements

    • Splitting of the regional and management cluster function in StackLight telemetry to obtain aggregated metrics on the management cluster from regional and managed clusters

    • Amendments to the StackLight alerts

2.0.0

September 16, 2020

5.7.0

First GA release of Container Cloud with the following key features:

  • Container Cloud with Mirantis Kubernetes Engine (MKE) container clusters for the management plane

  • Support for managed Container Cloud with MKE container clusters on top of the AWS, OpenStack, and bare metal cloud providers

  • Support for attaching of the existing MKE standalone clusters

  • Ceph as a Kubernetes storage provider for the bare metal use case

  • Multi-region support for security and scalability

  • IAM integration with MKE container clusters to provide SSO

  • Logging, monitoring, and alerting tuned for MKE with data aggregation to the management cluster and telemetry sent to Mirantis

** - the Cluster release supports only attachment of existing MKE 3.3.4 clusters. For the deployment of new or attachment of existing clusters based on other supported MKE versions, the latest available Cluster releases are used.

2.14.0

The Mirantis Container Cloud GA release 2.14.0:

  • Introduces support for the Cluster release 7.4.0 that is based on Mirantis Container Runtime 20.10.6 and the updated version of Mirantis Kubernetes Engine 3.4.6 with Kubernetes 1.20.

  • Introduces support for the Cluster release 5.21.0 that is based on the updated version of Mirantis Kubernetes Engine 3.3.13 with Kubernetes 1.18 and Mirantis Container Runtime 20.10.6.

  • Supports the Cluster release 6.20.0 that is based on the Cluster release 5.20.0 and represents Mirantis OpenStack for Kubernetes (MOS) 21.6.

  • Supports deprecated Cluster releases 5.20.0, 6.19.0, and 7.3.0 that will become unsupported in the following Container Cloud releases.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.14.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.14.0. For the list of enhancements in the Cluster releases 7.4.0 and 5.21.0 that are supported by the Container Cloud release 2.14.0, see the Cluster releases (managed).


Support of the Equinix Metal provider with private networking

TechPreview

Introduced the Technology Preview support of Container Cloud deployments that are based on the Equinix Metal infrastructure with private networking.

Private networks are required for the following use cases:

  • Connect the Container Cloud to the on-premises corporate networks without exposing it to the Internet. This can be required by corporate security policies.

  • Reduce ingress and egress bandwidth costs and the number of public IP addresses utilized by the deployment. Public IP addresses are a scarce and valuable resource, and Container Cloud should only expose the necessary services in that address space.

  • Testing and staging environments typically do not require accepting connections from outside the cluster. Such Container Cloud clusters should be isolated in private VLANs.

Caution

The feature is supported starting from the Cluster releases 7.4.0 and 5.21.0.

Note

Support of the regional clusters that are based on Equinix Metal with private networking will be announced in one of the following Container Cloud releases.

Support of the community CentOS 7.9 version for the OpenStack provider

Introduced support of the community version of the CentOS 7.9 operating system for the management, regional, and managed cluster machines deployed with the OpenStack provider. The following CentOS resources are used:

Configuration of server metadata for OpenStack machines in web UI

Implemented the possibility to specify the cloud-init metadata during the OpenStack machines creation through the Container Cloud web UI. Server metadata is a set of string key-value pairs that you can configure in the meta_data field of cloud-init.
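
For illustration, a hypothetical pair of metadata keys and a way to verify them from inside a provisioned VM are sketched below. The key names are invented for this example; the verification command uses the standard OpenStack metadata endpoint.

# Example key-value pairs entered during machine creation (names are illustrative)
role: worker
rack: rack-2

# From inside the provisioned VM, the resulting meta_data can typically be inspected with:
curl -s http://169.254.169.254/openstack/latest/meta_data.json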

Initial RHEL 8.4 support for the vSphere provider

TechPreview

Introduced the initial Technology Preview support of the RHEL 8.4 operating system for the vSphere-based management, regional, and managed clusters.

Caution

Deployment of a Container Cloud cluster based on both RHEL and CentOS operating systems or on mixed RHEL versions is not supported.

Configuration of RAM and CPU for vSphere machines in web UI

Implemented the possibility to configure the following settings during a vSphere machine creation using the Container Cloud web UI:

  • VM memory size that defaults to 16 GB

  • VM CPUs number that defaults to 8

Visualization of service mapping in the bare metal IpamHost object

Implemented the following amendments to the ipam/SVC-* labels to simplify visualization of service mapping in the bare metal IpamHost object:

  • All IP addresses allocated from the Subnet object that has the ipam/SVC-* service labels defined will inherit those labels.

  • The new ServiceMap field in IpamHost.Status contains information about which IPs and interfaces correspond to which Container Cloud services, as illustrated in the sketch below.
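
The exact serviceMap structure may differ between Container Cloud versions; the following fragment is only an illustrative sketch of how the ServiceMap field associates a service label with an interface and an IP address, with the names and addresses invented for this example:

status:
  serviceMap:
    ipam/SVC-dhcp-range:
      - ifName: ens3
        ipAddress: 10.0.1.15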

Separation of PXE and management networks for bare metal clusters

Added the capability to configure a dedicated PXE network that is separated from the management network on management or regional bare metal clusters. A separate PXE network allows isolating the sensitive bare metal provisioning process from end users. The users still have access to Container Cloud services, such as Keycloak, to authenticate workloads in managed clusters, such as Horizon in a Mirantis OpenStack for Kubernetes cluster.

User access management through the Container Cloud API or web UI

Implemented the capability to manage user access through the Container Cloud API or web UI by introducing the following objects to manage user role bindings (see the example after the list):

  • IAMUser

  • IAMRole

  • IAMGlobalRoleBinding

  • IAMRoleBinding

  • IAMClusterRoleBinding
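
A minimal sketch of such a binding, as it could be created through the API, is shown below. The field layout and the user name are illustrative only; refer to the Container Cloud API documentation for the authoritative schema.

apiVersion: iam.mirantis.com/v1alpha1
kind: IAMGlobalRoleBinding
metadata:
  name: example-user-global-admin
role:
  name: global-admin
user:
  name: example-user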

Also, updated the role naming used in Keycloak by introducing the following IAM roles with the possibility to upgrade the old-style role names with the new-style ones:

  • global-admin

  • bm-pool-operator

  • operator

  • user

  • stacklight-admin

Caution

  • User management for the MOSK m:os roles through API or web UI is at the final development stage and will be announced in one of the following Container Cloud releases. Meanwhile, continue managing these roles using Keycloak.

  • The possibility to manage the IAM*RoleBinding objects through the Container Cloud web UI is available for the global-admin role only. The possibility to manage project role bindings using the operator role will become available in one of the following Container Cloud releases.

Support matrix of MKE versions for cluster attachment

Updated the matrix of supported MKE versions for cluster attachment to improve the upgrade and testing procedures:

  • Implemented separate Cluster release series to support 2 series of MKE versions for cluster attachment:

    • Cluster release series 9.x for the 3.3.x version series

    • Cluster release series 10.x for the 3.4.x version series

  • Added a requirement to update an existing MKE cluster to the latest available supported MKE version in a series to trigger the Container Cloud upgrade that allows updating its components, such as StackLight, to the latest versions.

    When a new MKE version for cluster attachment is released in a series, the oldest supported version of the previous Container Cloud release is dropped.

The ‘Interface Guided Tour’ button in the Container Cloud web UI

Added the Interface Guided Tour button to the Help section of the Container Cloud web UI for convenient access to the guided tour that steps you through the key features of the multi-cluster, multi-cloud Container Cloud platform.

Switch of bare metal and StackLight Helm releases from v2 to v3

Upgraded the bare metal and StackLight Helm releases in the ClusterRelease and KaasRelease objects from v2 to v3. Switching of the remaining Ceph and OpenStack Helm releases to v3 will be implemented in one of the following Container Cloud releases.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.14.0 along with the Cluster releases 7.4.0 and 5.21.0.

  • [18429][StackLight] Increased the default resource requirements for Prometheus Elasticsearch Exporter to prevent the KubeContainersCPUThrottlingHigh alert from firing too often.

  • [18879][Ceph] Fixed the issue with the RADOS Gateway (RGW) pod overriding the global CA bundle located at /etc/pki/tls/certs with an incorrect self-signed CA bundle during deployment of a Ceph cluster.

  • [9899][Upgrade] Fixed the issue with Helm releases getting stuck in the PENDING_UPGRADE state during a management or managed cluster upgrade.

  • [18708][LCM] Fixed the issue with the Pending state of machines during deployment of any Container Cloud cluster or attachment of an existing MKE cluster due to some project being stuck in the Terminating state.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.14.0 including the Cluster releases 7.4.0, 6.20.0, and 5.21.0.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


Bare metal
[20745] Namespace deletion failure after managed cluster removal

Fixed in 2.16.0

After removal of a managed cluster, the namespace is not deleted due to KaaSCephOperationRequest CRs blocking the deletion. The workaround is to manually remove finalizers and delete the KaaSCephOperationRequest CRs.

Workaround:

  1. Remove finalizers from all KaaSCephOperationRequest resources:

    kubectl -n <managed-ns> get kaascephoperationrequest -o name | xargs -I % kubectl -n <managed-ns> patch % -p '{"metadata":{"finalizers":[]}}' --type=merge
    
  2. Delete all KaaSCephOperationRequest resources:

    kubectl -n <managed-ns> delete kaascephoperationrequest --all
    

[19786] Managed cluster deployment fails due to the dnsmasq-dhcpd logs overflow

Fixed in 2.15.0

A managed cluster deployment fails on long-running management clusters with BareMetalHost being stuck in the Preparing state and the ironic-conductor and ironic-api pods reporting the not enough disk space error due to the dnsmasq-dhcpd logs overflow.

Workaround:

  1. Log in to the ironic-conductor pod.

  2. Verify the free space in /volume/log/dnsmasq, for example, with the df command shown after this procedure.

    • If the free space on a volume is less than 10%:

      1. Manually delete log files in /volume/log/dnsmasq/.

      2. Scale down the dnsmasq pod to 0 replicas:

        kubectl -n kaas scale deployment dnsmasq --replicas=0
        
      3. Scale up the dnsmasq pod to 1 replica:

        kubectl -n kaas scale deployment dnsmasq --replicas=1
        
    • If the volume has enough space, assess the Ironic logs to identify the root cause of the issue.
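
A quick way to check the free space from inside the ironic-conductor pod, assuming standard coreutils are available in the container image, is:

df -h /volume/log/dnsmasq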


[17792] Full preflight fails with a timeout waiting for BareMetalHost

If you run bootstrap.sh preflight with KAAS_BM_FULL_PREFLIGHT=true, the script fails with the following message:

preflight check failed: preflight full check failed: \
error waiting for BareMetalHosts to power on: \
timed out waiting for the condition

Workaround:

  1. Unset full preflight using the unset KAAS_BM_FULL_PREFLIGHT environment variable.

  2. Rerun bootstrap.sh preflight, which executes the fast preflight instead, as shown in the sketch below.
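
A minimal command sequence for this workaround, assuming bootstrap.sh is run from the kaas-bootstrap directory, may look as follows:

unset KAAS_BM_FULL_PREFLIGHT
./bootstrap.sh preflight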


vSphere
[19737] The vSphere VM template build hangs with an empty kickstart file

Fixed in 2.15.0

On the vSphere deployments with the RHEL 8.4 seed node, the VM template build for deployment hangs because of an empty kickstart file provided to the VM. In this case, the VMware web console displays the following error for the affected VM:

Kickstart file /run/install/ks.cfg is missing

The fix for the issue is implemented in the latest version of the Packer image for the VM template build.

Workaround:

  1. Open bootstrap.sh in the kaas-bootstrap folder for editing.

  2. Update the Docker image tag for the VSPHERE_PACKER_DOCKER_IMAGE variable to v1.0-39.

  3. Save edits and restart the VM template build:

    ./bootstrap.sh vsphere_template
    
[19468] ‘Failed to remove finalizer from machine’ error during cluster deletion

Fixed in 2.15.0

If a RHEL license is removed before the related managed cluster is deleted, the cluster deletion hangs with the following Machine object error:

Failed to remove finalizer from machine ...
failed to get RHELLicense object

As a workaround, recreate the removed RHEL license object with the same name using the Container Cloud web UI or API.

Warning

The kubectl apply command automatically saves the applied data as plain text into the kubectl.kubernetes.io/last-applied-configuration annotation of the corresponding object. This may result in revealing sensitive data in this annotation when creating or modifying the object.

Therefore, do not use kubectl apply on this object. Use kubectl create, kubectl patch, or kubectl edit instead.

If you used kubectl apply on this object, you can remove the kubectl.kubernetes.io/last-applied-configuration annotation from the object using kubectl edit.


[14080] Node leaves the cluster after IP address change

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

A vSphere-based management cluster bootstrap fails due to a node leaving the cluster after an accidental IP address change.

The issue may affect a vSphere-based cluster only when IPAM is not enabled and IP address assignment to the vSphere virtual machines is done by a DHCP server present in the vSphere network.

By default, a DHCP server keeps the lease of an IP address for 30 minutes. Usually, the dhclient on a VM prolongs such a lease by sending DHCP requests to the server before the lease period ends. The DHCP prolongation request period is always less than the default lease time on the DHCP server, so prolongation usually works. However, in case of network issues, for example, when dhclient on the VM cannot reach the DHCP server, or when the VM takes longer than the lease time to power on, the VM may lose its assigned IP address. As a result, it obtains a new IP address.

Container Cloud does not support network reconfiguration after the IP address of the VM has changed. Therefore, such an issue may lead to the VM leaving the cluster.

Symptoms:

  • One of the nodes is in the NodeNotReady or down state:

    kubectl get nodes -o wide
    docker node ls
    
  • The UCP Swarm manager logs on the healthy manager node contain the following example error:

    docker logs -f ucp-swarm-manager
    
    level=debug msg="Engine refresh failed" id="<docker node ID>|<node IP>: 12376"
    
  • If the affected node is manager:

    • The output of the docker info command contains the following example error:

      Error: rpc error: code = Unknown desc = The swarm does not have a leader. \
      It's possible that too few managers are online. \
      Make sure more than half of the managers are online.
      
    • The UCP controller logs contain the following example error:

      docker logs -f ucp-controller
      
      "warning","msg":"Node State Active check error: \
      Swarm Mode Manager health check error: \
      info: Cannot connect to the Docker daemon at tcp://<node IP>:12376. \
      Is the docker daemon running?
      
  • On the affected node, the IP address on the first interface eth0 does not match the IP address configured in Docker. Verify the Node Address field in the output of the docker info command.

  • The following lines are present in /var/log/messages:

    dhclient[<pid>]: bound to <node IP> -- renewal in 1530 seconds
    

    If there are several lines where the IP is different, the node is affected.

Workaround:

Select from the following options:

  • Bind IP addresses for all machines to their MAC addresses on the DHCP server for the dedicated vSphere network. In this case, VMs receive only specified IP addresses that never change.

  • Remove the Container Cloud node IPs from the IP range on the DHCP server for the dedicated vSphere network and configure the first interface eth0 on VMs with a static IP address.

  • If a managed cluster is affected, redeploy it with IPAM enabled for new machines to be created and IPs to be assigned properly.


LCM
[6066] Helm releases get stuck in FAILED or UNKNOWN state

Note

The issue affects only Helm v2 releases and is addressed for Helm v3. Starting from Container Cloud 2.19.0, all Helm releases are switched to v3.

During a management, regional, or managed cluster deployment, Helm releases may get stuck in the FAILED or UNKNOWN state although the corresponding machines statuses are Ready in the Container Cloud web UI. For example, if the StackLight Helm release fails, the links to its endpoints are grayed out in the web UI. In the cluster status, providerStatus.helm.ready and providerStatus.helm.releaseStatuses.<releaseName>.success are false.

HelmBundle cannot recover from such states and requires manual actions. The workaround below describes the recovery steps for the stacklight release that got stuck during a cluster deployment. Use this procedure as an example for other Helm releases as required.

Workaround:

  1. Verify the failed release has the UNKNOWN or FAILED status in the HelmBundle object:

    kubectl --kubeconfig <regionalClusterKubeconfigPath> get helmbundle <clusterName> -n <clusterProjectName> -o=jsonpath={.status.releaseStatuses.stacklight}
    
    In the command above and in the steps below, replace the parameters
    enclosed in angle brackets with the corresponding values of your cluster.
    

    Example of system response:

    stacklight:
    attempt: 2
    chart: ""
    finishedAt: "2021-02-05T09:41:05Z"
    hash: e314df5061bd238ac5f060effdb55e5b47948a99460c02c2211ba7cb9aadd623
    message: '[{"occurrence":1,"lastOccurrenceDate":"2021-02-05 09:41:05","content":"error
      updating the release: rpc error: code = Unknown desc = customresourcedefinitions.apiextensions.k8s.io
      \"helmbundles.lcm.mirantis.com\" already exists"}]'
    notes: ""
    status: UNKNOWN
    success: false
    version: 0.1.2-mcp-398
    
  2. Log in to the helm-controller pod console:

    kubectl --kubeconfig <affectedClusterKubeconfigPath> exec -n kube-system -it helm-controller-0 sh -c tiller
    
  3. Download the Helm v3 binary. For details, see official Helm documentation.

  4. Remove the failed release:

    helm delete <failed-release-name>
    

    For example:

    helm delete stacklight
    

    Once deleted, the release is automatically redeployed.



IAM
[21024] Adding a custom certificate for Keycloak hangs with a timeout warning

Fixed in 2.15.0

Adding a custom certificate for Keycloak using the container-cloud binary hangs with the failed to wait for OIDC certificate to be updated timeout warning. The readiness check fails due to a wrong condition.

Ignore the timeout warning. If you can log in to the Container Cloud web UI, the certificate has been applied successfully.


[18331] Keycloak admin console menu disappears on ‘Add identity provider’ page

Fixed in 2.18.0

During configuration of an identity provider SAML using the Add identity provider menu of the Keycloak admin console, the page style breaks as well as the Save and Cancel buttons disappear.

Workaround:

  1. Log in to the Keycloak admin console.

  2. In the sidebar menu, switch to the Master realm.

  3. Navigate to Realm Settings > Themes.

  4. In the Admin Console Theme drop-down menu, select keycloak.

  5. Click Save and refresh the browser window to apply the changes.


StackLight
[18933] Alerta pods fail to pass the readiness check

Fixed in 2.15.0

Occasionally, an Alerta pod may not be Ready even if Patroni, the Alerta backend, operates correctly. In this case, some of the following errors may appear in the Alerta logs:

2021-10-25 13:10:55,865 DEBG 'nginx' stdout output:
2021/10/25 13:10:55 [crit] 25#25: *17408 connect() to unix:/tmp/uwsgi.sock failed (2: No such file or directory) while connecting to upstream, client: 127.0.0.1, server: , request: "GET /api/config HTTP/1.1", upstream: "uwsgi://unix:/tmp/uwsgi.sock:", host: "127.0.0.1:8080"
ip=\- [\25/Oct/2021:13:10:55 +0000] "\GET /api/config HTTP/1.1" \502 \157 "\-" "\python-requests/2.24.0"
/web | /api/config | > GET /api/config HTTP/1.1
2021-11-11 00:02:23,969 DEBG 'nginx' stdout output:
2021/11/11 00:02:23 [error] 23#23: *2014 connect() to unix:/tmp/uwsgi.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 172.16.37.243, server: , request: "GET /api/services HTTP/1.1", upstream: "uwsgi://unix:/tmp/uwsgi.sock:", host: "10.233.113.143:8080"
ip=\- [\11/Nov/2021:00:02:23 +0000] "\GET /api/services HTTP/1.1" \502 \157 "\-" "\kube-probe/1.20+"
/web | /api/services | > GET /api/services HTTP/1.1

As a workaround, manually restart the affected Alerta pods:

kubectl delete pod -n stacklight <POD_NAME>
[19682] URLs in Salesforce alerts use HTTP for IAM with enabled TLS

Fixed in 2.15.0

Prometheus web UI URLs in StackLight notifications sent to Salesforce use a wrong protocol: HTTP instead of HTTPS. The issue affects deployments with TLS enabled for IAM.

The workaround is to manually change the URL protocol in the web browser.


Storage
[20312] Creation of ceph-based PVs gets stuck in Pending state

The csi-rbdplugin-provisioner pod (csi-provisioner container) may show constant retries attempting to create a PV if the csi-rbdplugin-provisioner pod was scheduled and started on a node with no connectivity to the Ceph storage. As a result, creation of a Ceph-based persistent volume (PV) may get stuck in the Pending state.

As a workaround, manually specify the affinity or toleration rules for the csi-rbdplugin-provisioner pod.

Workaround:

  1. On the managed cluster, open the rook-ceph-operator-config map for editing:

    kubectl edit configmap -n rook-ceph rook-ceph-operator-config
    
  2. To avoid spawning pods on the nodes where this is not needed, set the provisioner node affinity specifying the required node labels. For example:

    CSI_PROVISIONER_NODE_AFFINITY: "role=storage-node; storage=rook, ceph"
    

Note

If needed, you can also specify CSI_PROVISIONER_TOLERATIONS tolerations. For example:

CSI_PROVISIONER_TOLERATIONS: |
  - effect: NoSchedule
    key: node-role.kubernetes.io/controlplane
    operator: Exists
  - effect: NoExecute
    key: node-role.kubernetes.io/etcd
    operator: Exists
[20355] KaaSCephOperationRequest is cached after recreation with the same name

Fixed in 2.15.0

When creating a new KaaSCephOperationRequest CR with the same name specified in metadata.name as in the previous KaaSCephOperationRequest CR, even if the previous request was deleted manually, the new request includes information about the previous actions and is in the Completed phase. In this case, no removal is performed.

Workaround:

  1. On the management cluster, manually delete the old KaasCephOperationRequest CR with the same metadata.name:

    kubectl -n ceph-lcm-mirantis delete KaasCephOperationRequest <name>
    
  2. On the managed cluster, manually delete the old CephOsdRemoveRequest with the same metadata.name:

    kubectl -n ceph-lcm-mirantis delete CephOsdRemoveRequest <name>
    
[20298] Spec validation failing during KaaSCephOperationRequest creation

Fixed in 2.15.0

Spec validation may fail with the following error when creating a KaaSCephOperationRequest CR:

The KaaSCephOperationRequest "test-remove-osd" is invalid: spec: Invalid value: 1:
spec in body should have at most 1 properties

Workaround:

  1. On the management cluster, open the kaascephoperationrequests.kaas.mirantis.com CRD for editing:

    kubectl edit crd kaascephoperationrequests.kaas.mirantis.com
    
  2. Remove maxProperties: 1 and minProperties: 1 from spec.versions[0].schema.openAPIV3Schema.properties.spec:

    spec:
      maxProperties: 1
      minProperties: 1
    
[19645] Ceph OSD removal request failure during ‘Processing’

Fixed in 2.15.0

Occasionally, when Processing a Ceph OSD removal request, KaaSCephOperationRequest retries the osd stop command without an interval, which leads to removal request failure.

As a workaround, create a new request to proceed with the Ceph OSD removal.

[19574] Ceph OSD removal does not clean up device used for multiple OSDs

Fixed in 2.15.0

When executing a Ceph OSD removal request to remove Ceph OSDs placed on one disk, the request completes without errors but the device itself still keeps the old LVM partitions. As a result, Rook cannot use such a device.

The workaround is to manually clean up the affected device as described in the Rook documentation: Zapping Devices.
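
A condensed sketch of that cleanup, adapted from the Rook zapping procedure, is provided below. Replace /dev/sdX with the affected device and verify each step against the Rook documentation for your Rook version before running it on a production node.

DISK="/dev/sdX"
# Wipe the partition table and LVM metadata left over from the removed Ceph OSDs
sgdisk --zap-all "$DISK"
# Remove stale ceph-volume device-mapper mappings and LVM artifacts
ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove %
rm -rf /dev/ceph-*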


Upgrade
[20459] Cluster upgrade fails with the certificate error during Ansible update

Fixed in 2.15.0

An upgrade of a management or regional cluster originally deployed using the Container Cloud release earlier than 2.8.0 fails with error setting certificate verify locations during Ansible update if a machine contains /usr/local/share/ca-certificates/mcc.crt, which is either empty or invalid. Managed clusters are not affected.

Workaround:

On every machine of the affected management or regional cluster:

  1. Delete /usr/local/share/ca-certificates/mcc.crt.

  2. In /etc/lcm/environment, remove the following line:

    export SSL_CERT_FILE="/usr/local/share/ca-certificates/mcc.crt"
    
  3. Restart lcm-agent:

    systemctl restart lcm-agent-v0.3.0-104-gb7f5e8d8
    

[20455] Cluster upgrade fails on the LCMMachine CRD update

An upgrade of a management or regional cluster originally deployed using the Container Cloud release earlier than 2.8.0 fails with:

  • The LCM Agent version not updating from v0.3.0-67-g25ab9f1a to v0.3.0-105-g6fb89599

  • The following error message appearing in the events of the related LCMMachine:

    kubectl describe lcmmachine <machineName>
    
    Failed to upgrade agent: failed to update agent upgrade status: \
    LCMMachine.lcm.mirantis.com "master-0" is invalid: \
    status.lcmAgentUpgradeStatus.finishedAt: Invalid value: "null": \
    status.lcmAgentUpgradeStatus.finishedAt in body must be of type string: "null"
    

As a workaround, change the preserveUnknownFields value for the LCMMachine CRD to false:

kubectl patch crd lcmmachines.lcm.mirantis.com -p '{"spec":{"preserveUnknownFields":false}}'

[4288] Equinix and MOS managed clusters update failure

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

The Equinix Metal and MOS-based managed clusters may fail to update to the latest Cluster release with kubelet being stuck and reporting authorization errors.

The cluster is affected by the issue if you see the Failed to make webhook authorizer request: context canceled error in the kubelet logs:

docker logs ucp-kubelet --since 5m 2>&1 | grep 'Failed to make webhook authorizer request: context canceled'

As a workaround, restart the ucp-kubelet container on the affected node(s):

ctr -n com.docker.ucp snapshot rm ucp-kubelet
docker rm -f ucp-kubelet

Note

Ignore failures in the output of the first command, if any.


[16379,23865] Cluster update fails with the FailedMount warning

Fixed in 2.19.0

An Equinix-based management or managed cluster fails to update with the FailedAttachVolume and FailedMount warnings.

Workaround:

  1. Verify that the description of the pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    
    • <affectedProjectName> is the Container Cloud project name where the pods failed to run

    • <affectedPodName> is a pod name that failed to run in this project

    In the pod description, identify the node name where the pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the pod that fails to init to 0 replicas.

  4. On every csi-rbdplugin pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state is Running.



Container Cloud web UI
[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs due to the token missing the necessary role for the new project. As a workaround, relogin to the Container Cloud web UI.


Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.14.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.27.6

aws-credentials-controller

1.27.6

Azure Updated

azure-provider

1.27.6

azure-credentials-controller

1.27.6

Bare metal

ambassador Updated

1.20.1-alpine

baremetal-operator Updated

5.2.7

baremetal-public-api Updated

5.2.7

baremetal-provider Updated

1.27.6

ironic Updated

victoria-bionic-20211103083724

ironic-operator

base-bionic-20210930105000

kaas-ipam Updated

base-bionic-20211028140230

local-volume-provisioner Updated

2.5.0-mcp

mariadb

10.4.17-bionic-20210617085111

IAM

iam Updated

2.4.10

iam-controller Updated

1.27.6

keycloak

12.0.0

Container Cloud

admission-controller Updated

1.27.6

agent-controller Updated

1.27.6

byo-credentials-controller Updated

1.27.6

byo-provider Updated

1.27.6

kaas-public-api Updated

1.27.6

kaas-exporter Updated

1.27.6

kaas-ui Updated

1.27.8

lcm-controller Updated

0.3.0-105-g6fb89599

mcc-cache Updated

1.27.6

portforward-controller Updated

1.27.6

proxy-controller Updated

1.27.6

rbac-controller Updated

1.27.6

release-controller Updated

1.27.6

rhellicense-controller Updated

1.27.6

squid-proxy

0.0.1-5

user-controller New

1.27.9

Equinix Metal Updated

equinix-provider

1.27.6

equinix-credentials-controller

1.27.6

OpenStack Updated

openstack-provider

1.27.6

os-credentials-controller

1.27.6

VMware vSphere Updated

vsphere-provider

1.27.6

vsphere-credentials-controller

1.27.6

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.14.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-5.2.7.tgz

baremetal-public-api Updated

https://binary.mirantis.com/bm/helm/baremetal-public-api-5.2.7.tgz

ironic-python-agent-bionic.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-victoria-bionic-debug-20210817124316

ironic-python-agent-bionic.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-victoria-bionic-debug-20210817124316

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-5.2.7.tgz

local-volume-provisioner Updated

https://binary.mirantis.com/bm/helm/local-volume-provisioner-2.5.0-mcp.tgz

provisioning_ansible

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-82-342bd22.tgz

target ubuntu system

https://binary.mirantis.com/bm/bin/efi/ubuntu/tgz-bionic-20210622161844

Docker images

ambassador Updated

mirantis.azurecr.io/lcm/nginx:1.20.1-alpine

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-bionic-20211005112459

dnsmasq

mirantis.azurecr.io/general/dnsmasq:focal-20210617094827

ironic Updated

mirantis.azurecr.io/openstack/ironic:victoria-bionic-20211103083724

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:victoria-bionic-20211103083724

ironic-operator

mirantis.azurecr.io/bm/ironic-operator:base-bionic-20210930105000

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20210608113804

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-bionic-20211028140230

mariadb

mirantis.azurecr.io/general/mariadb:10.4.17-bionic-20210617085111

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-bionic-20210617094817


Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.27.6.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.27.6.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.27.6.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.27.6.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.27.6.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.27.6.tgz

azure-credentials-controller

https://binary.mirantis.com/core/helm/azure-credentials-controller-1.27.6.tgz

azure-provider

https://binary.mirantis.com/core/helm/azure-provider-1.27.6.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.27.6.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.27.6.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.27.6.tgz

equinix-credentials-controller

https://binary.mirantis.com/core/helm/equinix-credentials-controller-1.27.6.tgz

equinix-provider

https://binary.mirantis.com/core/helm/equinix-provider-1.27.6.tgz

equinixmetalv2-provider New

https://binary.mirantis.com/core/helm/equinixmetalv2-provider-1.27.6.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.27.6.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.27.6.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.27.6.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.27.8.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.27.6.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.27.6.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.27.6.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.27.6.tgz

portforward-controller

https://binary.mirantis.com/core/helm/portforward-controller-1.27.6.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.27.6.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.27.6.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.27.6.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.27.6.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.27.6.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.27.6.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.27.6.tgz

user-controller New

https://binary.mirantis.com/core/helm/user-controller-1.27.9.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.27.6

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.27.6

aws-cluster-api-controller Updated

mirantis.azurecr.io/core/aws-cluster-api-controller:1.27.6

aws-credentials-controller Updated

mirantis.azurecr.io/core/aws-credentials-controller:1.27.6

azure-cluster-api-controller Updated

mirantis.azurecr.io/core/azure-cluster-api-controller:1.27.6

azure-credentials-controller Updated

mirantis.azurecr.io/core/azure-credentials-controller:1.27.6

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.27.6

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.27.6

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.27.6

cluster-api-provider-equinix Updated

mirantis.azurecr.io/core/cluster-api-provider-equinix:1.27.6

equinix-credentials-controller Updated

mirantis.azurecr.io/core/equinix-credentials-controller:1.27.6

frontend Updated

mirantis.azurecr.io/core/frontend:1.27.8

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.27.6

kproxy Updated

mirantis.azurecr.io/lcm/kproxy:1.27.6

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:v0.3.0-105-g6fb89599

nginx

mirantis.azurecr.io/lcm/nginx:1.20.1-alpine

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.27.6

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.27.6

portforward-controller Updated

mirantis.azurecr.io/core/portforward-controller:1.27.6

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.27.6

registry

mirantis.azurecr.io/lcm/registry:2.7.1

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.27.6

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.27.6

squid-proxy

mirantis.azurecr.io/core/squid-proxy:0.0.1-5

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-api-controller:1.27.6

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.27.6

user-controller New

mirantis.azurecr.io/core/user-controller:1.27.9


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

iamctl-linux Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.4-linux

iamctl-darwin Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.4-darwin

iamctl-windows Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.4-windows

Helm charts

iam Updated

http://binary.mirantis.com/iam/helm/iam-2.4.10.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.9.tgz

keycloak_proxy

http://binary.mirantis.com/core/helm/keycloak_proxy-1.26.6.tgz

Docker images

api Updated

mirantis.azurecr.io/iam/api:0.5.4

auxiliary Updated

mirantis.azurecr.io/iam/auxiliary:0.5.4

kubernetes-entrypoint

mirantis.azurecr.io/iam/external/kubernetes-entrypoint:v0.3.1

mariadb

mirantis.azurecr.io/general/mariadb:10.4.16-bionic-20201105025052

keycloak Updated

mirantis.azurecr.io/iam/keycloak:0.5.4

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-2

2.13.1

The Mirantis Container Cloud GA release 2.13.1 is based on 2.13.0 and:

  • Introduces support for the Cluster release 6.20.0 that is based on the Cluster release 5.20.0 and represents Mirantis OpenStack for Kubernetes (MOS) 21.6. This Cluster release is based on Mirantis Kubernetes Engine 3.3.12 with Kubernetes 1.18 and Mirantis Container Runtime 20.10.6.

  • Supports the latest Cluster releases 7.3.0 and 5.20.0.

  • Supports deprecated Cluster releases 7.2.0, 6.19.0, and 5.19.0 that will become unsupported in the following Container Cloud releases.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

For details about the Container Cloud release 2.13.1, refer to its parent release 2.13.0.

2.13.0

The Mirantis Container Cloud GA release 2.13.0:

  • Introduces support for the Cluster release 7.3.0 that is based on Mirantis Container Runtime 20.10.6 and Mirantis Kubernetes Engine 3.4.5 with Kubernetes 1.20.

  • Introduces support for the Cluster release 5.20.0 that is based on Mirantis Kubernetes Engine 3.3.12 with Kubernetes 1.18 and Mirantis Container Runtime 20.10.6.

  • Supports the Cluster release 6.19.0 that is based on the Cluster release 5.19.0 and represents Mirantis OpenStack for Kubernetes (MOS) 21.5.

  • Supports deprecated Cluster releases 5.19.0, 6.18.0, and 7.2.0 that will become unsupported in the following Container Cloud releases.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.13.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.13.0. For the list of enhancements in the Cluster releases 7.3.0 and 5.20.0 that are supported by the Container Cloud release 2.13.0, see the Cluster releases (managed).


Configuration of multiple DHCP ranges for bare metal clusters

Implemented the possibility to configure multiple DHCP ranges using the bare metal Subnet resources to facilitate multi-rack and other types of distributed bare metal datacenter topologies. The dnsmasq DHCP server used for host provisioning in Container Cloud now supports working with multiple L2 segments through DHCP relay capable network routers.

To configure DHCP ranges for dnsmasq, create the Subnet objects tagged with the ipam/SVC-dhcp-range label while setting up subnets for a managed cluster using Container Cloud CLI.
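
A hypothetical Subnet definition for one DHCP range is sketched below. The label value, addresses, and spec fields are illustrative only; verify the exact schema against the bare metal Subnet documentation for your release.

apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: dhcp-range-rack-1
  namespace: default
  labels:
    ipam/SVC-dhcp-range: "1"
    kaas.mirantis.com/provider: baremetal
spec:
  cidr: 10.100.1.0/24
  gateway: 10.100.1.1
  includeRanges:
    - 10.100.1.100-10.100.1.200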

Updated RAM requirements for management and regional clusters

To improve the Container Cloud performance and stability, increased RAM requirements for management and regional clusters from 16 to 24 GB for all supported cloud providers except bare metal, with the corresponding flavor changes for the AWS and Azure providers:

  • AWS: updated the instance type from c5d.2xlarge to c5d.4xlarge

  • Azure: updated the VM size from Standard_F8s_v2 to Standard_F16s_v2

For the Container Cloud managed clusters, requirements remain the same.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.13.0 along with the Cluster releases 7.3.0 and 5.20.0.

  • [17705][Azure] Fixed the issue with the failure to deploy more than 62 Azure worker nodes.

  • [17938][bare metal] Fixed the issue with the bare metal host profile being stuck in the match profile state during bootstrap.

  • [17960][bare metal] Fixed the issue with overflow of the Ironic storage volume causing a StackLight alert to be triggered for the ironic-aio-pvc volume filling up.

  • [17981][bare metal] Fixed the issue with failure to redeploy a bare metal node with an mdadm-based raid1 enabled due to insufficient cleanup of RAID devices.

  • [17359][regional cluster] Fixed the issue with failure to delete an AWS-based regional cluster due to the issue with the cluster credential deletion.

  • [18193][upgrade] Fixed the issue with failure to upgrade an Equinix Metal or baremetal-based management cluster with Ceph cluster being not ready.

  • [18076][upgrade] Fixed the issue with StackLight update failure on managed cluster with logging disabled after changing NodeSelector.

  • [17771][StackLight] Fixed the issue with the Watchdog alert not routing to Salesforce by default.

    If you have applied the workaround as described in StackLight known issues: 17771, revert it after updating the Cluster releases to 5.20.0, 6.20.0, or 7.3.0:

    1. Open the StackLight configuration manifest as described in StackLight configuration procedure.

    2. In alertmanagerSimpleConfig.salesForce:

      • remove the match and match_re parameters since they are deprecated

      • remove the matchers parameter since it changes the default settings

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.13.0 including the Cluster releases 7.3.0, 6.19.0, and 5.20.0.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


Bare metal
[18752] Bare metal hosts in ‘provisioned registration error’ state after update

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

After update of a management or managed cluster created using the Container Cloud release earlier than 2.6.0, a bare metal host state is Provisioned in the Container Cloud web UI while having the error state in logs with the following message:

status:
  errorCount: 1
  errorMessage: 'Host adoption failed: Error while attempting to adopt node  7a8d8aa7-e39d-48ec-98c1-ed05eacc354f:
    Validation of image href http://10.10.10.10/images/stub_image.qcow2 failed,
    reason: Got HTTP code 404 instead of 200 in response to HEAD request..'
  errorType: provisioned registration error

The issue is caused by the image URL pointing to an unavailable resource due to the URI IP change during update. As a workaround, update URLs for the bare metal host status and spec with the correct values that use a stable DNS record as a host.

Workaround:

Note

In the commands below, we update master-2 as an example. Replace it with the corresponding value to fit your deployment.

  1. Exit Lens.

  2. In a new terminal, configure access to the affected cluster.

  3. Start kube-proxy:

    kubectl proxy &
    
  4. Pause the reconcile:

    kubectl patch bmh master-2 --type=merge --patch '{"metadata":{"annotations":{"baremetalhost.metal3.io/paused": "true"}}}'
    
  5. Create the payload data with the following content:

    • For status_payload.json:

      {
         "status": {
            "errorCount": 0,
            "errorMessage": "",
            "provisioning": {
               "image": {
                  "checksum": "http://httpd-http/images/stub_image.qcow2.md5sum",
                  "url": "http://httpd-http/images/stub_image.qcow2"
               },
               "state": "provisioned"
            }
         }
      }
      
    • For spec_payload.json:

      {
         "spec": {
            "image": {
               "checksum": "http://httpd-http/images/stub_image.qcow2.md5sum",
               "url": "http://httpd-http/images/stub_image.qcow2"
            }
         }
      }
      
  6. Verify that the payload data is valid:

    cat status_payload.json | jq
    cat spec_payload.json | jq
    

    The system response must contain the data added in the previous step.

  7. Patch the bare metal host status with payload:

    curl -k -v -XPATCH -H "Accept: application/json" -H "Content-Type: application/merge-patch+json" --data-binary "@status_payload.json" 127.0.0.1:8001/apis/metal3.io/v1alpha1/namespaces/default/baremetalhosts/master-2/status
    
  8. Patch the bare metal host spec with payload:

    kubectl patch bmh master-2 --type=merge --patch "$(cat spec_payload.json)"
    
  9. Resume the reconcile:

    kubectl patch bmh master-2 --type=merge --patch '{"metadata":{"annotations":{"baremetalhost.metal3.io/paused":null}}}'
    
  10. Close the terminal to quit kube-proxy and resume Lens.

[17792] Full preflight fails with a timeout waiting for BareMetalHost

If you run bootstrap.sh preflight with KAAS_BM_FULL_PREFLIGHT=true, the script fails with the following message:

preflight check failed: preflight full check failed: \
error waiting for BareMetalHosts to power on: \
timed out waiting for the condition

Workaround:

  1. Unset full preflight using the unset KAAS_BM_FULL_PREFLIGHT environment variable.

  2. Rerun bootstrap.sh preflight, which executes the fast preflight instead.


OpenStack
[10424] Regional cluster cleanup fails by timeout

An OpenStack-based regional cluster cleanup fails with the timeout error.

Workaround:

  1. Wait for the Cluster object to be deleted in the bootstrap cluster:

    kubectl --kubeconfig <(./bin/kind get kubeconfig --name clusterapi) get cluster
    

    The system output must be empty.

  2. Remove the bootstrap cluster manually:

    ./bin/kind delete cluster --name clusterapi
    


vSphere
[19468] ‘Failed to remove finalizer from machine’ error during cluster deletion

Fixed in 2.15.0

If a RHEL license is removed before the related managed cluster is deleted, the cluster deletion hangs with the following Machine object error:

Failed to remove finalizer from machine ...
failed to get RHELLicense object

As a workaround, recreate the removed RHEL license object with the same name using the Container Cloud web UI or API.

Warning

The kubectl apply command automatically saves the applied data as plain text into the kubectl.kubernetes.io/last-applied-configuration annotation of the corresponding object. This may result in revealing sensitive data in this annotation when creating or modifying the object.

Therefore, do not use kubectl apply on this object. Use kubectl create, kubectl patch, or kubectl edit instead.

If you used kubectl apply on this object, you can remove the kubectl.kubernetes.io/last-applied-configuration annotation from the object using kubectl edit.


[14080] Node leaves the cluster after IP address change

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

A vSphere-based management cluster bootstrap fails due to a node leaving the cluster after an accidental IP address change.

The issue may affect a vSphere-based cluster only when IPAM is not enabled and IP address assignment to the vSphere virtual machines is done by a DHCP server present in the vSphere network.

By default, a DHCP server keeps the lease of an IP address for 30 minutes. Usually, the dhclient on a VM prolongs such a lease by sending DHCP requests to the server before the lease period ends. The DHCP prolongation request period is always less than the default lease time on the DHCP server, so prolongation usually works. However, in case of network issues, for example, when dhclient on the VM cannot reach the DHCP server, or when the VM takes longer than the lease time to power on, the VM may lose its assigned IP address. As a result, it obtains a new IP address.

Container Cloud does not support network reconfiguration after the IP address of the VM has changed. Therefore, such an issue may lead to the VM leaving the cluster.

Symptoms:

  • One of the nodes is in the NodeNotReady or down state:

    kubectl get nodes -o wide
    docker node ls
    
  • The UCP Swarm manager logs on the healthy manager node contain the following example error:

    docker logs -f ucp-swarm-manager
    
    level=debug msg="Engine refresh failed" id="<docker node ID>|<node IP>: 12376"
    
  • If the affected node is a manager:

    • The output of the docker info command contains the following example error:

      Error: rpc error: code = Unknown desc = The swarm does not have a leader. \
      It's possible that too few managers are online. \
      Make sure more than half of the managers are online.
      
    • The UCP controller logs contain the following example error:

      docker logs -f ucp-controller
      
      "warning","msg":"Node State Active check error: \
      Swarm Mode Manager health check error: \
      info: Cannot connect to the Docker daemon at tcp://<node IP>:12376. \
      Is the docker daemon running?
      
  • On the affected node, the IP address on the first interface eth0 does not match the IP address configured in Docker. Verify the Node Address field in the output of the docker info command.
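
    For example, to compare the two addresses on the affected node (a sketch that assumes shell access to the node):

    docker info 2>/dev/null | grep -i 'node address'
    ip -4 addr show eth0
    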

  • The following lines are present in /var/log/messages:

    dhclient[<pid>]: bound to <node IP> -- renewal in 1530 seconds
    

    If there are several lines where the IP is different, the node is affected.
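
    For example, to list all lease acquisitions recorded in the log (assuming dhclient logs to /var/log/messages as shown above):

    grep dhclient /var/log/messages | grep 'bound to'
    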

Workaround:

Select from the following options:

  • Bind IP addresses for all machines to their MAC addresses on the DHCP server for the dedicated vSphere network. In this case, VMs receive only specified IP addresses that never change.

  • Remove the Container Cloud node IPs from the IP range on the DHCP server for the dedicated vSphere network and configure the first interface eth0 on VMs with a static IP address. A configuration sketch is provided after this list.

  • If a managed cluster is affected, redeploy it with IPAM enabled for new machines to be created and IPs to be assigned properly.
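
A minimal netplan sketch for the static eth0 option above, assuming Ubuntu with netplan; all addresses are placeholders and must be adjusted to your network:

# /etc/netplan/01-eth0-static.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses: [192.168.100.10/24]  # placeholder node IP
      gateway4: 192.168.100.1         # placeholder default gateway
      nameservers:
        addresses: [192.168.100.1]    # placeholder DNS server

Apply the configuration with sudo netplan apply and verify the address with ip -4 addr show eth0.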


LCM
[18708] ‘Pending’ state of machines during a cluster deployment or attachment

Fixed in 2.14.0

During deployment of any Container Cloud cluster or attachment of an existing MKE cluster that is not deployed by Container Cloud, the machines are stuck in the Pending state, and the lcm-controller logs contain no lcmcluster-controller entries except the following ones:

kubectl --kubeconfig <pathToMgmtOrRegionalClusterKubeconfig> logs lcm-lcm-controller-<controllerID> -n kaas | grep lcmcluster-controller

{"level":"info","ts":1634808016.777575,"logger":"controller-runtime.manager.controller.lcmcluster-controller","msg":"Starting EventSource","source":"kind   source: /, Kind="}
{"level":"info","ts":1634808016.8779392,"logger":"controller-runtime.manager.controller.lcmcluster-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}

The issue affects only clusters with the Container Cloud projects (Kubernetes namespaces) in the Terminating state.

Workaround:

  1. Verify the state of the Container Cloud projects:

    kubectl --kubeconfig <pathToMgmtOrRegionalClusterKubeconfig> get ns
    

    If any project is in the Terminating state, proceed to the next step. Otherwise, further assess the cluster logs to identify the root cause of the issue.

  2. Clean up the project that is stuck in the Terminating state:

    1. Identify the objects that are stuck in the project:

      kubectl --kubeconfig <pathToMgmtOrRegionalClusterKubeconfig> get ns <projectName> -o yaml
      

      Example of system response:

      ...
      status:
       conditions:
         ...
         - lastTransitionTime: "2021-10-19T17:05:23Z"
           message: 'Some resources are remaining: pods. has 1 resource instances'
           reason: SomeResourcesRemain
           status: "True"
           type: NamespaceContentRemaining
      
    2. Remove the metadata.finalizers field from the affected objects:

      kubectl --kubeconfig <pathToMgmtOrRegionalClusterKubeconfig> edit <objectType>/<objectName> -n <objectProjectName>
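
      Alternatively, a sketch that clears the finalizers non-interactively with a merge patch (setting the field to null removes it):

      kubectl --kubeconfig <pathToMgmtOrRegionalClusterKubeconfig> patch <objectType> <objectName> -n <objectProjectName> --type=merge -p '{"metadata":{"finalizers":null}}'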
      
  3. Restart lcm-controller on the affected management or regional cluster:

    kubectl --kubeconfig <pathToMgmtOrRegionalClusterKubeconfig> get pod -n kaas | awk '/lcm-controller/ {print $1}' | xargs \
    kubectl --kubeconfig <pathToMgmtOrRegionalClusterKubeconfig> delete pod -n kaas
    

[6066] Helm releases get stuck in FAILED or UNKNOWN state

Note

The issue affects only Helm v2 releases and is addressed for Helm v3. Starting from Container Cloud 2.19.0, all Helm releases are switched to v3.

During a management, regional, or managed cluster deployment, Helm releases may get stuck in the FAILED or UNKNOWN state although the corresponding machines statuses are Ready in the Container Cloud web UI. For example, if the StackLight Helm release fails, the links to its endpoints are grayed out in the web UI. In the cluster status, providerStatus.helm.ready and providerStatus.helm.releaseStatuses.<releaseName>.success are false.

HelmBundle cannot recover from such states and requires manual actions. The workaround below describes the recovery steps for the stacklight release that got stuck during a cluster deployment. Use this procedure as an example for other Helm releases as required.

Workaround:

  1. Verify the failed release has the UNKNOWN or FAILED status in the HelmBundle object:

    kubectl --kubeconfig <regionalClusterKubeconfigPath> get helmbundle <clusterName> -n <clusterProjectName> -o=jsonpath={.status.releaseStatuses.stacklight}
    
    In the command above and in the steps below, replace the parameters
    enclosed in angle brackets with the corresponding values of your cluster.
    

    Example of system response:

    stacklight:
    attempt: 2
    chart: ""
    finishedAt: "2021-02-05T09:41:05Z"
    hash: e314df5061bd238ac5f060effdb55e5b47948a99460c02c2211ba7cb9aadd623
    message: '[{"occurrence":1,"lastOccurrenceDate":"2021-02-05 09:41:05","content":"error
      updating the release: rpc error: code = Unknown desc = customresourcedefinitions.apiextensions.k8s.io
      \"helmbundles.lcm.mirantis.com\" already exists"}]'
    notes: ""
    status: UNKNOWN
    success: false
    version: 0.1.2-mcp-398
    
  2. Log in to the helm-controller pod console:

    kubectl --kubeconfig <affectedClusterKubeconfigPath> exec -n kube-system -it helm-controller-0 sh -c tiller
    
  3. Download the Helm v3 binary. For details, see official Helm documentation.
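
    For example, a minimal sketch that assumes the container has wget, network access to get.helm.sh, and the linux-amd64 architecture (adjust the Helm version as needed):

    wget https://get.helm.sh/helm-v3.7.1-linux-amd64.tar.gz
    tar -xzf helm-v3.7.1-linux-amd64.tar.gz
    mv linux-amd64/helm /usr/local/bin/helm
    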

  4. Remove the failed release:

    helm delete <failed-release-name>
    

    For example:

    helm delete stacklight
    

    Once done, the release is triggered for redeployment.



IAM
[18331] Keycloak admin console menu disappears on ‘Add identity provider’ page

Fixed in 2.18.0

During configuration of an identity provider SAML using the Add identity provider menu of the Keycloak admin console, the page style breaks as well as the Save and Cancel buttons disappear.

Workaround:

  1. Log in to the Keycloak admin console.

  2. In the sidebar menu, switch to the Master realm.

  3. Navigate to Realm Settings > Themes.

  4. In the Admin Console Theme drop-down menu, select keycloak.

  5. Click Save and refresh the browser window to apply the changes.


StackLight
[19682] URLs in Salesforce alerts use HTTP for IAM with enabled TLS

Fixed in 2.15.0

Prometheus web UI URLs in StackLight notifications sent to Salesforce use a wrong protocol: HTTP instead of HTTPS. The issue affects deployments with TLS enabled for IAM.

The workaround is to manually change the URL protocol in the web browser.

Storage
[20312] Creation of ceph-based PVs gets stuck in Pending state

The csi-rbdplugin-provisioner pod (csi-provisioner container) may show constant retries attempting to create a PV if the csi-rbdplugin-provisioner pod was scheduled and started on a node with no connectivity to the Ceph storage. As a result, creation of a Ceph-based persistent volume (PV) may get stuck in the Pending state.

As a workaround, manually specify the affinity or toleration rules for the csi-rbdplugin-provisioner pod.

Workaround:

  1. On the managed cluster, open the rook-ceph-operator-config map for editing:

    kubectl edit configmap -n rook-ceph rook-ceph-operator-config
    
  2. To avoid spawning pods on the nodes where this is not needed, set the provisioner node affinity specifying the required node labels. For example:

    CSI_PROVISIONER_NODE_AFFINITY: "role=storage-node; storage=rook, ceph"
    

Note

If needed, you can also specify CSI_PROVISIONER_TOLERATIONS tolerations. For example:

CSI_PROVISIONER_TOLERATIONS: |
  - effect: NoSchedule
    key: node-role.kubernetes.io/controlplane
    operator: Exists
  - effect: NoExecute
    key: node-role.kubernetes.io/etcd
    operator: Exists

[18879] The RGW pod overrides the global CA bundle with an incorrect mount

Fixed in 2.14.0

During deployment of a Ceph cluster, the RADOS Gateway (RGW) pod overrides the global CA bundle located at /etc/pki/tls/certs with an incorrect self-signed CA bundle. The issue affects only clusters with public certificates.

Workaround:

  1. Open the KaasCephCluster CR of a managed cluster for editing:

    kubectl edit kaascephcluster -n <managedClusterProjectName>
    

    Substitute <managedClusterProjectName> with a corresponding value.

  2. Select from the following options:

    • If you are using the GoDaddy certificates, in the cephClusterSpec.objectStorage.rgw section, replace the cacert parameters with your public CA certificate that already contains both the root CA certificate and intermediate CA certificate:

      cephClusterSpec:
        objectStorage:
          rgw:
            SSLCert:
              cacert: |
                -----BEGIN CERTIFICATE-----
                ca-certificate here
                -----END CERTIFICATE-----
              tlsCert: |
                -----BEGIN CERTIFICATE-----
                private TLS certificate here
                -----END CERTIFICATE-----
              tlsKey: |
                -----BEGIN RSA PRIVATE KEY-----
                private TLS key here
                -----END RSA PRIVATE KEY-----
      
    • If you are using the DigiCert certificates:

      1. Download the <root_CA> from DigiCert.

      2. In the cephClusterSpec.objectStorage.rgw section, replace the cacert parameters with your public intermediate CA certificate along with the root one:

        cephClusterSpec:
          objectStorage:
            rgw:
              SSLCert:
                cacert: |
                  -----BEGIN CERTIFICATE-----
                  <root CA here>
                  <intermediate CA here>
                  -----END CERTIFICATE-----
                tlsCert: |
                  -----BEGIN CERTIFICATE-----
                  private TLS certificate here
                  -----END CERTIFICATE-----
                tlsKey: |
                  -----BEGIN RSA PRIVATE KEY-----
                  private TLS key here
                  -----END RSA PRIVATE KEY-----
        

[16300] ManageOsds works unpredictably on Rook 1.6.8 and Ceph 15.2.13

Affects only Container Cloud 2.11.0, 2.12.0, 2.13.0, and 2.13.1

Ceph LCM automatic operations such as Ceph OSD or Ceph node removal are unstable for the new Rook 1.6.8 and Ceph 15.2.13 (Ceph Octopus) versions and may cause data corruption. Therefore, manageOsds is disabled until further notice.

As a workaround, to safely remove a Ceph OSD or node from a Ceph cluster, perform the steps described in Remove Ceph OSD manually.



Upgrade
[4288] Equinix and MOS managed clusters update failure

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

The Equinix Metal and MOS-based managed clusters may fail to update to the latest Cluster release with kubelet being stuck and reporting authorization errors.

The cluster is affected by the issue if you see the Failed to make webhook authorizer request: context canceled error in the kubelet logs:

docker logs ucp-kubelet --since 5m 2>&1 | grep 'Failed to make webhook authorizer request: context canceled'

As a workaround, restart the ucp-kubelet container on the affected node(s):

ctr -n com.docker.ucp snapshot rm ucp-kubelet
docker rm -f ucp-kubelet

Note

Ignore failures in the output of the first command, if any.


[16379,23865] Cluster update fails with the FailedMount warning

Fixed in 2.19.0

An Equinix-based management or managed cluster fails to update with the FailedAttachVolume and FailedMount warnings.

Workaround:

  1. Verify that the description of the pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    
    • <affectedProjectName> is the Container Cloud project name where the pods failed to run

    • <affectedPodName> is a pod name that failed to run in this project

    In the pod description, identify the node name where the pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the pod that fails to init to 0 replicas.
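
    For example, for a StatefulSet (the resource name is a placeholder):

    kubectl -n <affectedProjectName> scale statefulset <affectedStatefulSetName> --replicas=0
    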

  4. On every csi-rbdplugin pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state is Running.


[9899] Helm releases get stuck in PENDING_UPGRADE during cluster update

Fixed in 2.14.0

Helm releases may get stuck in the PENDING_UPGRADE status during a management or managed cluster upgrade. The HelmBundle Controller cannot recover from this state and requires manual actions. The workaround below describes the recovery process for the openstack-operator release that got stuck during a managed cluster update. Use it as an example for other Helm releases as required.

Workaround:

  1. Log in to the helm-controller pod console:

    kubectl exec -n kube-system -it helm-controller-0 sh -c tiller
    
  2. Identify the release that is stuck in the PENDING_UPGRADE status. For example:

    ./helm --host=localhost:44134 history openstack-operator
    

    Example of system response:

    REVISION  UPDATED                   STATUS           CHART                      DESCRIPTION
    1         Tue Dec 15 12:30:41 2020  SUPERSEDED       openstack-operator-0.3.9   Install complete
    2         Tue Dec 15 12:32:05 2020  SUPERSEDED       openstack-operator-0.3.9   Upgrade complete
    3         Tue Dec 15 16:24:47 2020  PENDING_UPGRADE  openstack-operator-0.3.18  Preparing upgrade
    
  3. Roll back the failed release to the previous revision:

    1. Download the Helm v3 binary. For details, see official Helm documentation.

    2. Roll back the failed release:

      helm rollback <failed-release-name>
      

      For example:

      helm rollback openstack-operator 2
      

    Once done, the release will be reconciled.



Container Cloud web UI
[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs because the token is missing the necessary role for the new project. As a workaround, log out of the Container Cloud web UI and log in again.


Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.13.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.26.6

aws-credentials-controller

1.26.6

Azure Updated

azure-provider

1.26.6

azure-credentials-controller

1.26.6

Bare metal

ambassador New

1.18.0

Bare metal

baremetal-operator Updated

5.2.3

baremetal-public-api Updated

5.2.3

baremetal-provider Updated

1.26.6

httpd Replaced with ambassador

n/a

ironic Updated

victoria-bionic-20211006090712

ironic-operator Updated

base-bionic-20210930105000

kaas-ipam Updated

base-bionic-20210930121606

local-volume-provisioner

1.0.6-mcp

mariadb

10.4.17-bionic-20210617085111

IAM

iam

2.4.8

iam-controller Updated

1.26.6

keycloak

12.0.0

Container Cloud

admission-controller Updated

1.26.6

agent-controller Updated

1.26.6

byo-credentials-controller Updated

1.26.6

byo-provider Updated

1.26.6

kaas-public-api Updated

1.26.6

kaas-exporter Updated

1.26.6

kaas-ui Updated

1.26.6

lcm-controller Updated

0.3.0-76-g3a45ff9e

mcc-cache Updated

1.26.6

portforward-controller New

1.26.6

proxy-controller Updated

1.26.6

rbac-controller Updated

1.26.6

release-controller Updated

1.26.6

rhellicense-controller Updated

1.26.6

squid-proxy

0.0.1-5

Equinix Metal Updated

equinix-provider

1.26.6

equinix-credentials-controller

1.26.6

OpenStack Updated

openstack-provider

1.26.6

os-credentials-controller

1.26.6

VMware vSphere Updated

vsphere-provider

1.26.6

vsphere-credentials-controller

1.26.6

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.13.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-5.2.3.tgz

baremetal-public-api Updated

https://binary.mirantis.com/bm/helm/baremetal-public-api-5.2.3.tgz

ironic-python-agent-bionic.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-victoria-bionic-debug-20210817124316

ironic-python-agent-bionic.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-victoria-bionic-debug-20210817124316

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-5.2.3.tgz

local-volume-provisioner

https://binary.mirantis.com/bm/helm/local-volume-provisioner-1.0.6-mcp.tgz

provisioning_ansible Updated

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-82-342bd22.tgz

target ubuntu system

https://binary.mirantis.com/bm/bin/efi/ubuntu/tgz-bionic-20210622161844

Docker images

ambassador New

mirantis.azurecr.io/lcm/nginx:1.18.0

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-bionic-20211005112459

dnsmasq

mirantis.azurecr.io/general/dnsmasq:focal-20210617094827

httpd

n/a (replaced with ambassador)

ironic Updated

mirantis.azurecr.io/openstack/ironic:victoria-bionic-20211006090712

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:victoria-bionic-20211006090712

ironic-operator Updated

mirantis.azurecr.io/bm/ironic-operator:base-bionic-20210930105000

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20210608113804

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-bionic-20210930121606

mariadb

mirantis.azurecr.io/general/mariadb:10.4.17-bionic-20210617085111

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-bionic-20210617094817


Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.26.6.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.26.6.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.26.6.tgz

agent-controller

https://binary.mirantis.com/core/helm/agent-controller-1.26.6.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.26.6.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.26.6.tgz

azure-credentials-controller

https://binary.mirantis.com/core/helm/azure-credentials-controller-1.26.6.tgz

azure-provider

https://binary.mirantis.com/core/helm/azure-provider-1.26.6.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.26.6.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.26.6.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.26.6.tgz

equinix-credentials-controller

https://binary.mirantis.com/core/helm/equinix-credentials-controller-1.26.6.tgz

equinix-provider

https://binary.mirantis.com/core/helm/equinix-provider-1.26.6.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.26.6.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.26.6.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.26.6.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.26.6.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.26.6.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.26.6.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.26.6.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.26.6.tgz

portforward-controller New

https://binary.mirantis.com/core/helm/portforward-controller-1.26.6.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.26.6.tgz

rbac-controller

https://binary.mirantis.com/core/helm/rbac-controller-1.26.6.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.26.6.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.26.6.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.26.6.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.26.6.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.26.6.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.26.6

agent-controller Updated

mirantis.azurecr.io/core/agent-controller:1.26.6

aws-cluster-api-controller Updated

mirantis.azurecr.io/core/aws-cluster-api-controller:1.26.6

aws-credentials-controller Updated

mirantis.azurecr.io/core/aws-credentials-controller:1.26.6

azure-cluster-api-controller Updated

mirantis.azurecr.io/core/azure-cluster-api-controller:1.26.6

azure-credentials-controller Updated

mirantis.azurecr.io/core/azure-credentials-controller:1.26.6

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.26.6

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.26.6

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.26.6

cluster-api-provider-equinix Updated

mirantis.azurecr.io/core/cluster-api-provider-equinix:1.26.6

equinix-credentials-controller Updated

mirantis.azurecr.io/core/equinix-credentials-controller:1.26.6

frontend Updated

mirantis.azurecr.io/core/frontend:1.26.6

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.26.6

kproxy Updated

mirantis.azurecr.io/lcm/kproxy:1.26.6

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:v0.3.0-76-g3a45ff9e

nginx Updated

mirantis.azurecr.io/lcm/nginx:1.20.1-alpine

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.26.6

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.26.6

portforward-controller New

mirantis.azurecr.io/core/portforward-controller:1.26.6

rbac-controller Updated

mirantis.azurecr.io/core/rbac-controller:1.26.6

registry

mirantis.azurecr.io/lcm/registry:2.7.1

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.26.6

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.26.6

squid-proxy

mirantis.azurecr.io/core/squid-proxy:0.0.1-5

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-api-controller:1.26.6

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.26.6


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

iamctl-linux

http://binary.mirantis.com/iam/bin/iamctl-0.5.3-linux

iamctl-darwin

http://binary.mirantis.com/iam/bin/iamctl-0.5.3-darwin

iamctl-windows

http://binary.mirantis.com/iam/bin/iamctl-0.5.3-windows

Helm charts

iam

http://binary.mirantis.com/iam/helm/iam-2.4.8.tgz

iam-proxy Updated

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.9.tgz

keycloak_proxy Updated

http://binary.mirantis.com/core/helm/keycloak_proxy-1.26.6.tgz

Docker images

api

mirantis.azurecr.io/iam/api:0.5.3

auxiliary

mirantis.azurecr.io/iam/auxiliary:0.5.3

kubernetes-entrypoint Updated

mirantis.azurecr.io/iam/external/kubernetes-entrypoint:v0.3.1

mariadb

mirantis.azurecr.io/general/mariadb:10.4.16-bionic-20201105025052

keycloak

mirantis.azurecr.io/iam/keycloak:0.5.3

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-2

2.12.0

The Mirantis Container Cloud GA release 2.12.0:

  • Introduces support for the Cluster release 7.2.0 that is based on Mirantis Container Runtime 20.10.6 and Mirantis Kubernetes Engine 3.4.5 with Kubernetes 1.20.

  • Introduces support for the Cluster release 5.19.0 that is based on Mirantis Kubernetes Engine 3.3.12 with Kubernetes 1.18 and Mirantis Container Runtime 20.10.6.

  • Introduces support for the Cluster release 6.19.0 that is based on the Cluster release 5.19.0 and represents Mirantis OpenStack for Kubernetes (MOS) 21.5.

  • Supports deprecated Cluster releases 5.18.0, 6.18.0, and 7.1.0 that will become unsupported in the following Container Cloud releases.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.12.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.12.0. For the list of enhancements in the Cluster releases 7.2.0, 6.19.0, and 5.19.0 that are supported by the Container Cloud release 2.12.0, see the Cluster releases (managed).


General availability of the Microsoft Azure cloud provider

Introduced official support for the Microsoft Azure cloud provider, including support for creating and operating management, regional, and managed clusters.

Container Cloud deployment on top of MOS Victoria

Implemented the possibility to deploy Container Cloud management, regional, and managed clusters on top of Mirantis OpenStack for Kubernetes (MOS) Victoria that is based on the Open vSwitch networking.

LVM or mdadm RAID support for bare metal provisioning

TECHNOLOGY PREVIEW

Added the Technology Preview support for configuration of software-based Redundant Array of Independent Disks (RAID) using BareMetalHostProfile to set up LVM or mdadm-based RAID level 1 (raid1). If required, you can further configure RAID in the same profile, for example, to install a cluster operating system onto a RAID device.

You can configure RAID during a baremetal-based management or managed cluster creation. RAID configuration on already provisioned bare metal machines or on an existing cluster is not supported.

Caution

This feature is available as Technology Preview. Use such configuration for testing and evaluation purposes only. For the Technology Preview feature definition, refer to Technology Preview features.

Preparing state of a bare metal host

Added the Preparing state to the provisioning workflow of bare metal hosts. Bare Metal Operator inspects a bare metal host and moves it to the Preparing state. In this state, the host becomes ready to be linked to a bare metal machine.
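
For example, to observe the current host states in a project (a sketch; the exact columns depend on the Bare Metal Operator version):

kubectl get baremetalhosts -n <projectName>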

TLS for all Container Cloud endpoints

Added the Transport Layer Security (TLS) configuration to all Container Cloud endpoints for all supported cloud providers. The Container Cloud web UI and StackLight endpoints are now available through TLS with self-signed certificates generated by the Container Cloud provider. If required, you can also add your own TLS certificates to the Container Cloud web UI and Keycloak.

Caution

After the Container Cloud upgrade from 2.11.0 to 2.12.0, all Container Cloud endpoints are available only through HTTPS.

Migration of iam-proxy from Louketo Proxy to OAuth2 Proxy

Migrated iam-proxy from the deprecated Louketo Proxy, formerly known as keycloak-proxy, to OAuth2 Proxy.

To apply the migration, all iam-proxy services in the StackLight namespace are restarted during a management cluster upgrade or managed cluster update. This causes a short downtime for the web UI access to StackLight services, although all services themselves, such as Kibana or Grafana, continue working.

Backup configuration for a MariaDB database on a management cluster

Implemented the possibility to customize the default backup configuration for a MariaDB database on a management cluster. You can customize the default configuration either during a management cluster bootstrap or on an existing management cluster. The Kubernetes cron job responsible for the MariaDB backup is enabled by default for the OpenStack and AWS cloud providers and is disabled for other supported providers.

Renaming of the Container Cloud binary

In the scope of continuous improvement of the product, renamed the Container Cloud binary from kaas to container-cloud.

Documentation enhancements

On top of continuous improvements delivered to the existing Container Cloud guides, added a procedure on how to back up and restore an OpenStack or AWS-based management cluster. The procedure consists of the MariaDB and MKE backup and restore steps.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.12.0 along with the Cluster releases 7.2.0, 6.19.0, and 5.19.0.

  • [16718][Equinix Metal] Fixed the issue with the Equinix Metal provider failing to create machines with an SSH key error if an Equinix Metal based cluster was being deployed in an Equinix Metal project with no SSH keys.

  • [17118][bare metal] Fixed the issue with failure to add a new machine to a baremetal-based managed cluster after the management cluster upgrade.

  • [16959][OpenStack] Fixed the issue with failure to create a proxy-based OpenStack regional cluster due to the issue with the proxy secret creation.

  • [13385][IAM] Fixed the issue with MariaDB pods failing to start after MariaDB blocked itself during the State Snapshot Transfers sync.

  • [8367][LCM] Fixed the issue with joining etcd from a new node to an existing etcd cluster. The issue caused the new managed node to hang in the Deploy state when adding it to a managed cluster.

  • [16873][bootstrap] Fixed the issue with a management cluster bootstrap failing with the failed to establish connection with tiller error due to kind 0.9.0 delivered with the bootstrap script not being compatible with the latest Ubuntu 18.04 image that requires kind 0.11.1.

  • [16964][Ceph] Fixed the issue with a bare metal or Equinix Metal management cluster upgrade getting stuck and then failing with some Ceph daemons being stuck on upgrade to Octopus and with the insecure global_id reclaim health warning in Ceph logs.

  • [16843][StackLight] Fixed the issue causing inability to override default route matchers for Salesforce notifier.

    If you have applied the workaround as described in StackLight known issues: 16843 after updating the cluster releases to 5.19.0, 7.2.0, or 6.19.0 and if you need to define custom matchers, replace the deprecated match and match_re parameters with matchers as required. For details, see Deprecation notes and StackLight configuration parameters.

  • [17477][Update][StackLight] Fixed the issue with StackLight in HA mode placed on controller nodes being not deployed or cluster update being blocked. Once you update your Mirantis OpenStack for Kubernetes cluster from the Cluster release 6.18.0 to 6.19.0, roll back the workaround applied as described in Upgrade known issues: 17477:

    1. Remove stacklight labels from worker nodes. Wait for the labels to be removed.

    2. Remove the custom nodeSelector section from the cluster spec.

  • [17477][Update][StackLight] Fixed the issue with StackLight in HA mode placed on controller nodes not being deployed or the cluster update being blocked. Once you update your Mirantis OpenStack for Kubernetes cluster from the Cluster release 6.18.0 to 6.19.0, roll back the workaround applied as described in Upgrade known issues: 17477:

  • [17069][Update][Ceph] Fixed the issue with upgrade of a bare metal or Equinix Metal based management or managed cluster failing with the Failed to configure Ceph cluster error due to different versions of the rook-ceph-osd deployments.

  • [17007][Update] Fixed the issue with the false-positive release: “squid-proxy” not found error during a management cluster upgrade of any supported cloud provider except vSphere.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.12.0 including the Cluster releases 7.2.0, 6.19.0, and 5.19.0.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


AWS
[8013] Managed cluster deployment requiring PVs may fail

Fixed in the Cluster release 7.0.0

Note

The issue below affects only the Kubernetes 1.18 deployments. Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

On a management cluster with multiple AWS-based managed clusters, some clusters fail to complete the deployments that require persistent volumes (PVs), for example, Elasticsearch. Some of the affected pods get stuck in the Pending state with the pod has unbound immediate PersistentVolumeClaims and node(s) had volume node affinity conflict errors.

Warning

The workaround below applies to HA deployments where data can be rebuilt from replicas. If you have a non-HA deployment, back up any existing data before proceeding, since all data will be lost while applying the workaround.

Workaround:

  1. Obtain the persistent volume claims related to the storage mounts of the affected pods:

    kubectl get pod/<pod_name1> pod/<pod_name2> \
    -o jsonpath='{.spec.volumes[?(@.persistentVolumeClaim)].persistentVolumeClaim.claimName}'
    

    Note

    In the command above and in the subsequent steps, substitute the parameters enclosed in angle brackets with the corresponding values.

  2. Delete the affected Pods and PersistentVolumeClaims to reschedule them. For example, for StackLight:

    kubectl -n stacklight delete \
      pod/<pod_name1> pod/<pod_name2> ... \
      pvc/<pvc_name1> pvc/<pvc_name2> ...
    


Azure
[17705] Failure to deploy more than 62 Azure worker nodes

Fixed in 2.13.0

The default value of the Ports per instance load balancer outbound NAT setting, which is 1024, prevents deployment of more than 62 Azure worker nodes on a managed cluster. To work around the issue, set the Ports per instance parameter to 256.

Workaround:

  1. Log in to the Azure portal.

  2. Navigate to Home > Load Balancing.

  3. Find and click the load balancer called mcc-<uniqueClusterID>. You can obtain <uniqueClusterID> in the Cluster info field in the Container Cloud web UI.

  4. In the load balancer Settings left-side menu, click Outbound rules > OutboundNATAllProtocols.

  5. In the Outbound ports > Choose by menu, select Ports per instance.

  6. In the Ports per instance field, replace the default 1024 value with 256.

  7. Click Save to apply the new setting.



Bare metal
[18752] Bare metal hosts in ‘provisioned registration error’ state after update

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

After update of a management or managed cluster created using the Container Cloud release earlier than 2.6.0, a bare metal host state is Provisioned in the Container Cloud web UI while having the error state in logs with the following message:

status:
  errorCount: 1
  errorMessage: 'Host adoption failed: Error while attempting to adopt node  7a8d8aa7-e39d-48ec-98c1-ed05eacc354f:
    Validation of image href http://10.10.10.10/images/stub_image.qcow2 failed,
    reason: Got HTTP code 404 instead of 200 in response to HEAD request..'
  errorType: provisioned registration error

The issue is caused by the image URL pointing to an unavailable resource due to the URI IP change during update. As a workaround, update URLs for the bare metal host status and spec with the correct values that use a stable DNS record as a host.

Workaround:

Note

In the commands below, we update master-2 as an example. Replace it with the corresponding value to fit your deployment.

  1. Exit Lens.

  2. In a new terminal, configure access to the affected cluster.

  3. Start kube-proxy:

    kubectl proxy &
    
  4. Pause the reconcile:

    kubectl patch bmh master-2 --type=merge --patch '{"metadata":{"annotations":{"baremetalhost.metal3.io/paused": "true"}}}'
    
  5. Create the payload data with the following content:

    • For status_payload.json:

      {
         "status": {
            "errorCount": 0,
            "errorMessage": "",
            "provisioning": {
               "image": {
                  "checksum": "http://httpd-http/images/stub_image.qcow2.md5sum",
                  "url": "http://httpd-http/images/stub_image.qcow2"
               },
               "state": "provisioned"
            }
         }
      }
      
    • For spec_payload.json:

      {
         "spec": {
            "image": {
               "checksum": "http://httpd-http/images/stub_image.qcow2.md5sum",
               "url": "http://httpd-http/images/stub_image.qcow2"
            }
         }
      }
      
  6. Verify that the payload data is valid:

    cat status_payload.json | jq
    cat spec_payload.json | jq
    

    The system response must contain the data added in the previous step.

  7. Patch the bare metal host status with payload:

    curl -k -v -XPATCH -H "Accept: application/json" -H "Content-Type: application/merge-patch+json" --data-binary "@status_payload.json" 127.0.0.1:8001/apis/metal3.io/v1alpha1/namespaces/default/baremetalhosts/master-2/status
    
  8. Patch the bare metal host spec with payload:

    kubectl patch bmh master-2 --type=merge --patch "$(cat spec_payload.json)"
    
  9. Resume the reconcile:

    kubectl patch bmh master-2 --type=merge --patch '{"metadata":{"annotations":{"baremetalhost.metal3.io/paused":null}}}'
    
  10. Close the terminal to quit kube-proxy and resume Lens.

[17981] Failure to redeploy a bare metal node with RAID 1

Fixed in 2.13.0

Redeployment of a bare metal node with an mdadm-based raid1 enabled fails due to insufficient cleanup of RAID devices.

Workaround:

  1. Boot the affected node from any LiveCD, preferably Ubuntu.

  2. Obtain details about the mdadm RAID devices:

    sudo mdadm --detail --scan --verbose
    
  3. Stop all mdadm RAID devices listed in the output of the above command. For example:

    sudo mdadm --stop /dev/md0
    
  4. Clean up the metadata on partitions with the mdadm RAID device(s) enabled. For example:

    sudo mdadm --zero-superblock /dev/sda1
    

    In the above example, replace /dev/sda1 with the partitions listed in the output of the command in step 2.


[17960] Overflow of the Ironic storage volume

Fixed in 2.13.0

On the baremetal-based management clusters with the Container Cloud version 2.12.0 or earlier, the storage volume used by Ironic can run out of free space. As a result, a StackLight alert is triggered for the ironic-aio-pvc volume filling up.

Symptoms

One or more of the following symptoms are observed:

  • The StackLight KubePersistentVolumeUsageCritical alert is firing for the volume ironic-aio-pvc.

  • The ironic and dnsmasq Deployments are not in the OK status:

    kubectl -n kaas get deployments
    
  • One or multiple ironic and dnsmasq pods fail to start:

    • For dnsmasq:

      kubectl get pods -n kaas -o wide | grep dnsmasq
      

      If the number of ready containers for the pod is not 2/2, the management cluster can be affected by the issue.

    • For ironic:

      kubectl get pods -n kaas -o wide | grep ironic
      

      If the number of ready containers for the pod is not 6/6, the management cluster can be affected by the issue.

  • The free space on a volume is less than 10%. To verify space usage on a volume:

    kubectl -n kaas exec -ti deployment/ironic -c ironic-api -- /bin/bash -c 'df -h |grep -i "volume\|size"'
    

    Example of system response where 14% is the used space of a volume:

    Filesystem                 Size  Used Avail Use% Mounted on
    /dev/rbd0                  4.9G  686M  4.2G  14% /volume
    

As a workaround, truncate the log files on the storage volume:

kubectl -n kaas exec -ti deployment/dnsmasq -- /bin/bash -c 'truncate -s 0 /volume/log/ironic/ironic-api.log'
kubectl -n kaas exec -ti deployment/dnsmasq -- /bin/bash -c 'truncate -s 0 /volume/log/ironic/ironic-conductor.log'
kubectl -n kaas exec -ti deployment/dnsmasq -- /bin/bash -c 'truncate -s 0 /volume/log/ironic/ansible-playbook.log'
kubectl -n kaas exec -ti deployment/dnsmasq -- /bin/bash -c 'truncate -s 0 /volume/log/ironic-inspector/ironic-inspector.log'
kubectl -n kaas exec -ti deployment/dnsmasq -- /bin/bash -c 'truncate -s 0 /volume/log/dnsmasq/dnsmasq-dhcpd.log'
kubectl -n kaas exec -ti deployment/dnsmasq -- /bin/bash -c 'truncate -s 0 /volume/log/ambassador/access.log'
kubectl -n kaas exec -ti deployment/dnsmasq -- /bin/bash -c 'truncate -s 0 /volume/log/ambassador/error.log'

[17792] Full preflight fails with a timeout waiting for BareMetalHost

If you run bootstrap.sh preflight with KAAS_BM_FULL_PREFLIGHT=true, the script fails with the following message:

preflight check failed: preflight full check failed: \
error waiting for BareMetalHosts to power on: \
timed out waiting for the condition

Workaround:

  1. Unset full preflight by clearing the KAAS_BM_FULL_PREFLIGHT environment variable.
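
    For example, in the shell session where you run the bootstrap script:

    unset KAAS_BM_FULL_PREFLIGHT
    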

  2. Rerun bootstrap.sh preflight that executes fast preflight instead.


OpenStack
[10424] Regional cluster cleanup fails by timeout

An OpenStack-based regional cluster cleanup fails with the timeout error.

Workaround:

  1. Wait for the Cluster object to be deleted in the bootstrap cluster:

    kubectl --kubeconfig <(./bin/kind get kubeconfig --name clusterapi) get cluster
    

    The system output must be empty.

  2. Remove the bootstrap cluster manually:

    ./bin/kind delete cluster --name clusterapi
    


vSphere
[14080] Node leaves the cluster after IP address change

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

A vSphere-based management cluster bootstrap fails due to a node leaving the cluster after an accidental IP address change.

The issue may affect a vSphere-based cluster only when IPAM is not enabled and IP address assignment to the vSphere virtual machines is done by a DHCP server present in the vSphere network.

By default, a DHCP server keeps lease of the IP address for 30 minutes. Usually, a VM dhclient prolongs such lease by frequent DHCP requests to the server before the lease period ends. The DHCP prolongation request period is always less than the default lease time on the DHCP server, so prolongation usually works. But in case of network issues, for example, when dhclient from the VM cannot reach the DHCP server, or the VM is being slowly powered on for more than the lease time, such VM may lose its assigned IP address. As a result, it obtains a new IP address.

Container Cloud does not support network reconfiguration after the IP of the VM has been changed. Therefore, such issue may lead to a VM leaving the cluster.

Symptoms:

  • One of the nodes is in the NodeNotReady or down state:

    kubectl get nodes -o wide
    docker node ls
    
  • The UCP Swarm manager logs on the healthy manager node contain the following example error:

    docker logs -f ucp-swarm-manager
    
    level=debug msg="Engine refresh failed" id="<docker node ID>|<node IP>: 12376"
    
  • If the affected node is a manager:

    • The output of the docker info command contains the following example error:

      Error: rpc error: code = Unknown desc = The swarm does not have a leader. \
      It's possible that too few managers are online. \
      Make sure more than half of the managers are online.
      
    • The UCP controller logs contain the following example error:

      docker logs -f ucp-controller
      
      "warning","msg":"Node State Active check error: \
      Swarm Mode Manager health check error: \
      info: Cannot connect to the Docker daemon at tcp://<node IP>:12376. \
      Is the docker daemon running?
      
  • On the affected node, the IP address on the first interface eth0 does not match the IP address configured in Docker. Verify the Node Address field in the output of the docker info command.

  • The following lines are present in /var/log/messages:

    dhclient[<pid>]: bound to <node IP> -- renewal in 1530 seconds
    

    If there are several lines where the IP is different, the node is affected.

Workaround:

Select from the following options:

  • Bind IP addresses for all machines to their MAC addresses on the DHCP server for the dedicated vSphere network. In this case, VMs receive only specified IP addresses that never change.

  • Remove the Container Cloud node IPs from the IP range on the DHCP server for the dedicated vSphere network and configure the first interface eth0 on VMs with a static IP address.

  • If a managed cluster is affected, redeploy it with IPAM enabled for new machines to be created and IPs to be assigned properly.


LCM
[16146] Stuck kubelet on the Cluster release 5.x.x series

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

Occasionally, kubelet may get stuck on the Cluster release 5.x.x series with different errors in the ucp-kubelet containers leading to the nodes failures. The following error occurs every time when accessing the Kubernetes API server:

an error on the server ("") has prevented the request from succeeding

As a workaround, restart ucp-kubelet on the failed node:

ctr -n com.docker.ucp snapshot rm ucp-kubelet
docker rm -f ucp-kubelet

[6066] Helm releases get stuck in FAILED or UNKNOWN state

Note

The issue affects only Helm v2 releases and is addressed for Helm v3. Starting from Container Cloud 2.19.0, all Helm releases are switched to v3.

During a management, regional, or managed cluster deployment, Helm releases may get stuck in the FAILED or UNKNOWN state although the corresponding machines statuses are Ready in the Container Cloud web UI. For example, if the StackLight Helm release fails, the links to its endpoints are grayed out in the web UI. In the cluster status, providerStatus.helm.ready and providerStatus.helm.releaseStatuses.<releaseName>.success are false.

HelmBundle cannot recover from such states and requires manual actions. The workaround below describes the recovery steps for the stacklight release that got stuck during a cluster deployment. Use this procedure as an example for other Helm releases as required.

Workaround:

  1. Verify the failed release has the UNKNOWN or FAILED status in the HelmBundle object:

    kubectl --kubeconfig <regionalClusterKubeconfigPath> get helmbundle <clusterName> -n <clusterProjectName> -o=jsonpath={.status.releaseStatuses.stacklight}
    
    In the command above and in the steps below, replace the parameters
    enclosed in angle brackets with the corresponding values of your cluster.
    

    Example of system response:

    stacklight:
    attempt: 2
    chart: ""
    finishedAt: "2021-02-05T09:41:05Z"
    hash: e314df5061bd238ac5f060effdb55e5b47948a99460c02c2211ba7cb9aadd623
    message: '[{"occurrence":1,"lastOccurrenceDate":"2021-02-05 09:41:05","content":"error
      updating the release: rpc error: code = Unknown desc = customresourcedefinitions.apiextensions.k8s.io
      \"helmbundles.lcm.mirantis.com\" already exists"}]'
    notes: ""
    status: UNKNOWN
    success: false
    version: 0.1.2-mcp-398
    
  2. Log in to the helm-controller pod console:

    kubectl --kubeconfig <affectedClusterKubeconfigPath> exec -n kube-system -it helm-controller-0 sh -c tiller
    
  3. Download the Helm v3 binary. For details, see official Helm documentation.

  4. Remove the failed release:

    helm delete <failed-release-name>
    

    For example:

    helm delete stacklight
    

    Once done, the release is triggered for redeployment.



IAM
[18331] Keycloak admin console menu disappears on ‘Add identity provider’ page

Fixed in 2.18.0

During configuration of an identity provider SAML using the Add identity provider menu of the Keycloak admin console, the page style breaks as well as the Save and Cancel buttons disappear.

Workaround:

  1. Log in to the Keycloak admin console.

  2. In the sidebar menu, switch to the Master realm.

  3. Navigate to Realm Settings > Themes.

  4. In the Admin Console Theme drop-down menu, select keycloak.

  5. Click Save and refresh the browser window to apply the changes.


StackLight
[17771] Watchdog alert missing in Salesforce route

Fixed in 2.13.0

The Watchdog alert is not routed to Salesforce by default.

Note

After applying the workaround, you may notice the following warning message. It is expected and does not affect configuration rendering:

Warning: Merging destination map for chart 'stacklight'. Overwriting table
item 'match', with non table value: []

Workaround:

  1. Open the StackLight configuration manifest as described in StackLight configuration procedure.

  2. In alertmanagerSimpleConfig.salesForce, specify the following configuration:

    alertmanagerSimpleConfig:
      salesForce:
        route:
          match: []
          match_re:
            severity: "informational|critical"
          matchers:
          - severity=~"informational|critical"
    

[19682] URLs in Salesforce alerts use HTTP for IAM with enabled TLS

Fixed in 2.15.0

Prometheus web UI URLs in StackLight notifications sent to Salesforce use a wrong protocol: HTTP instead of HTTPS. The issue affects deployments with TLS enabled for IAM.

The workaround is to manually change the URL protocol in the web browser.


Storage
[20312] Creation of ceph-based PVs gets stuck in Pending state

The csi-rbdplugin-provisioner pod (csi-provisioner container) may show constant retries attempting to create a PV if the csi-rbdplugin-provisioner pod was scheduled and started on a node with no connectivity to the Ceph storage. As a result, creation of a Ceph-based persistent volume (PV) may get stuck in the Pending state.

As a workaround, manually specify the affinity or toleration rules for the csi-rbdplugin-provisioner pod.

Workaround:

  1. On the managed cluster, open the rook-ceph-operator-config map for editing:

    kubectl edit configmap -n rook-ceph rook-ceph-operator-config
    
  2. To avoid spawning pods on the nodes where this is not needed, set the provisioner node affinity specifying the required node labels. For example:

    CSI_PROVISIONER_NODE_AFFINITY: "role=storage-node; storage=rook, ceph"
    

Note

If needed, you can also specify CSI_PROVISIONER_TOLERATIONS tolerations. For example:

CSI_PROVISIONER_TOLERATIONS: |
  - effect: NoSchedule
    key: node-role.kubernetes.io/controlplane
    operator: Exists
  - effect: NoExecute
    key: node-role.kubernetes.io/etcd
    operator: Exists

[18879] The RGW pod overrides the global CA bundle with an incorrect mount

Fixed in 2.14.0

During deployment of a Ceph cluster, the RADOS Gateway (RGW) pod overrides the global CA bundle located at /etc/pki/tls/certs with an incorrect self-signed CA bundle. The issue affects only clusters with public certificates.

Workaround:

  1. Open the KaasCephCluster CR of a managed cluster for editing:

    kubectl edit kaascephcluster -n <managedClusterProjectName>
    

    Substitute <managedClusterProjectName> with a corresponding value.

  2. Select from the following options:

    • If you are using the GoDaddy certificates, in the cephClusterSpec.objectStorage.rgw section, replace the cacert parameters with your public CA certificate that already contains both the root CA certificate and intermediate CA certificate:

      cephClusterSpec:
        objectStorage:
          rgw:
            SSLCert:
              cacert: |
                -----BEGIN CERTIFICATE-----
                ca-certificate here
                -----END CERTIFICATE-----
              tlsCert: |
                -----BEGIN CERTIFICATE-----
                private TLS certificate here
                -----END CERTIFICATE-----
              tlsKey: |
                -----BEGIN RSA PRIVATE KEY-----
                private TLS key here
                -----END RSA PRIVATE KEY-----
      
    • If you are using the DigiCert certificates:

      1. Download the <root_CA> from DigiCert.

      2. In the cephClusterSpec.objectStorage.rgw section, replace the cacert parameters with your public intermediate CA certificate along with the root one:

        cephClusterSpec:
          objectStorage:
            rgw:
              SSLCert:
                cacert: |
                  -----BEGIN CERTIFICATE-----
                  <root CA here>
                  <intermediate CA here>
                  -----END CERTIFICATE-----
                tlsCert: |
                  -----BEGIN CERTIFICATE-----
                  private TLS certificate here
                  -----END CERTIFICATE-----
                tlsKey: |
                  -----BEGIN RSA PRIVATE KEY-----
                  private TLS key here
                  -----END RSA PRIVATE KEY-----
        

[16300] ManageOsds works unpredictably on Rook 1.6.8 and Ceph 15.2.13

Affects only Container Cloud 2.11.0, 2.12.0, 2.13.0, and 2.13.1

Ceph LCM automatic operations such as Ceph OSD or Ceph node removal are unstable for the new Rook 1.6.8 and Ceph 15.2.13 (Ceph Octopus) versions and may cause data corruption. Therefore, manageOsds is disabled until further notice.

As a workaround, to safely remove a Ceph OSD or node from a Ceph cluster, perform the steps described in Remove Ceph OSD manually.



Regional cluster
[17359] Deletion of AWS-based regional cluster credential fails

Fixed in 2.13.0

During deletion of an AWS-based regional cluster, deletion of the cluster credential fails with error deleting regional credential: error waiting for credential deletion: timed out waiting for the condition.

Workaround:

  1. Change the directory to kaas-bootstrap.

  2. Scale up the aws-credentials-controller-aws-credentials-controller deployment:

    ./bin/kind get kubeconfig --name clusterapi > kubeconfig-bootstrap
    
    kubectl --kubeconfig kubeconfig-bootstrap scale deployment \
    aws-credentials-controller-aws-credentials-controller \
    --namespace kaas --replicas=1
    
  3. Wait until the affected credential is deleted:

    kubectl --kubeconfig <pathToMgmtClusterKubeconfig> \
    get awscredentials.kaas.mirantis.com -A -l kaas.mirantis.com/region=<regionName>
    

    In the above command, replace:

    • <regionName> with the name of the region where the regional cluster is located.

    • <pathToMgmtClusterKubeconfig> with the path to the corresponding management cluster kubeconfig.

    Example of a positive system response:

    No resources found
    
  4. Delete the bootstrap cluster:

    ./bin/kind delete cluster --name clusterapi
    


Upgrade
[18193] Management cluster upgrade fails with Ceph cluster being not ready

Fixed in 2.13.0

An Equinix Metal or baremetal-based management cluster upgrade may fail with the following error message:

Reconcile MiraCeph 'ceph-lcm-mirantis/rook-ceph' failed with error:
failed to ensure cephcluster: failed to ensure cephcluster rook-ceph/rook-ceph:
ceph cluster rook-ceph/rook-ceph is not ready to be updated

Your cluster is affected if:

  1. The rook-ceph/rook-ceph-operator logs contain the following errors:

    Failed to update lock: Internal error occurred:
    unable to unmarshal response in forceLegacy: json:
    cannot unmarshal number into Go value of type bool
    
    Failed to update lock: Internal error occurred:
    unable to perform request for determining if legacy behavior should be forced
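
    To inspect the rook-ceph-operator logs, you can run, for example:

    kubectl -n rook-ceph logs -l app=rook-ceph-operator --tail=100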
    
  2. The kubectl -n rook-ceph get cephcluster command returns the cephcluster resource with the Progressing state.

As a workaround, restart the rook-ceph-operator pod:

kubectl -n rook-ceph delete pod -l app=rook-ceph-operator

[4288] Equinix and MOS managed clusters update failure

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

The Equinix Metal and MOS-based managed clusters may fail to update to the latest Cluster release with kubelet being stuck and reporting authorization errors.

The cluster is affected by the issue if you see the Failed to make webhook authorizer request: context canceled error in the kubelet logs:

docker logs ucp-kubelet --since 5m 2>&1 | grep 'Failed to make webhook authorizer request: context canceled'

As a workaround, restart the ucp-kubelet container on the affected node(s):

ctr -n com.docker.ucp snapshot rm ucp-kubelet
docker rm -f ucp-kubelet

Note

Ignore failures in the output of the first command, if any.


[16379,23865] Cluster update fails with the FailedMount warning

Fixed in 2.19.0

An Equinix-based management or managed cluster fails to update with the FailedAttachVolume and FailedMount warnings.

Workaround:

  1. Verify that the description of the pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    
    • <affectedProjectName> is the Container Cloud project name where the pods failed to run

    • <affectedPodName> is a pod name that failed to run in this project

    In the pod description, identify the node name where the pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the pod that fails to init to 0 replicas.
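
    For example, assuming the affected workload is a StatefulSet named <affectedStatefulSetName> in the <affectedProjectName> project (both names are placeholders), scale it down as follows:

    kubectl -n <affectedProjectName> scale statefulset <affectedStatefulSetName> --replicas=0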

  4. On every csi-rbdplugin pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state is Running.


[9899] Helm releases get stuck in PENDING_UPGRADE during cluster update

Fixed in 2.14.0

Helm releases may get stuck in the PENDING_UPGRADE status during a management or managed cluster upgrade. The HelmBundle Controller cannot recover from this state and requires manual actions. The workaround below describes the recovery process for the openstack-operator release that got stuck during a managed cluster update. Use it as an example for other Helm releases as required.

Workaround:

  1. Log in to the helm-controller pod console:

    kubectl exec -n kube-system -it helm-controller-0 sh -c tiller
    
  2. Identify the release that is stuck in the PENDING_UPGRADE status. For example:

    ./helm --host=localhost:44134 history openstack-operator
    

    Example of system response:

    REVISION  UPDATED                   STATUS           CHART                      DESCRIPTION
    1         Tue Dec 15 12:30:41 2020  SUPERSEDED       openstack-operator-0.3.9   Install complete
    2         Tue Dec 15 12:32:05 2020  SUPERSEDED       openstack-operator-0.3.9   Upgrade complete
    3         Tue Dec 15 16:24:47 2020  PENDING_UPGRADE  openstack-operator-0.3.18  Preparing upgrade
    
  3. Roll back the failed release to the previous revision:

    1. Download the Helm v3 binary. For details, see official Helm documentation.

    2. Roll back the failed release:

      helm rollback <failed-release-name>
      

      For example:

      helm rollback openstack-operator 2
      

    Once done, the release will be reconciled.


[18076] StackLight update failure

Fixed in 2.13.0

On a managed cluster with logging disabled, changing NodeSelector can cause StackLight update failure with the following message in the StackLight Helm Controller logs:

Upgrade "stacklight" failed: Job.batch "stacklight-delete-logging-pvcs-*" is invalid: spec.template: Invalid value: ...

As a workaround, disable the stacklight-delete-logging-pvcs-* job.

Workaround:

  1. Open the affected Cluster object for editing:

    kubectl edit cluster <affectedManagedClusterName> -n <affectedManagedClusterProjectName>
    
  2. Set deleteVolumes to false:

    spec:
      ...
      providerSpec:
        ...
        value:
          ...
          helmReleases:
            ...
            - name: stacklight
              values:
                ...
                logging:
                  deleteVolumes: false
                ...
    


Container Cloud web UI
[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs due to the token missing the necessary role for the new project. As a workaround, relogin to the Container Cloud web UI.


Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.12.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.25.6

aws-credentials-controller

1.25.6

Azure Updated

azure-provider

1.25.6

azure-credentials-controller

1.25.6

Bare metal

baremetal-operator Updated

5.2.1

baremetal-public-api Updated

5.2.1

baremetal-provider Updated

1.25.6

httpd

1.18.0

ironic

victoria-bionic-20210719060025

ironic-operator Updated

base-bionic-20210908110402

kaas-ipam Updated

base-bionic-20210819150000

local-volume-provisioner

1.0.6-mcp

mariadb

10.4.17-bionic-20210617085111

IAM

iam Updated

2.4.8

iam-controller Updated

1.25.6

keycloak

12.0.0

Container Cloud

admission-controller Updated

1.25.6

agent-controller New

1.25.6

byo-credentials-controller Updated

1.25.6

byo-provider Updated

1.25.6

kaas-public-api Updated

1.25.6

kaas-exporter Updated

1.25.6

kaas-ui Updated

1.25.8

lcm-controller Updated

0.3.0-41-g6ecc1974

mcc-cache Updated

1.25.6

proxy-controller Updated

1.25.6

rbac-controller New

1.25.7

release-controller Updated

1.25.6

rhellicense-controller Updated

1.25.6

squid-proxy

0.0.1-5

Equinix Metal Updated

equinix-provider

1.25.6

equinix-credentials-controller

1.25.6

OpenStack Updated

openstack-provider

1.25.6

os-credentials-controller

1.25.6

VMware vSphere Updated

vsphere-provider

1.25.6

vsphere-credentials-controller

1.25.6

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.12.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-5.2.1.tgz

baremetal-public-api Updated

https://binary.mirantis.com/bm/helm/baremetal-public-api-5.2.1.tgz

ironic-python-agent-bionic.kernel Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-victoria-bionic-debug-20210817124316

ironic-python-agent-bionic.initramfs Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-victoria-bionic-debug-20210817124316

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-5.2.1.tgz

local-volume-provisioner

https://binary.mirantis.com/bm/helm/local-volume-provisioner-1.0.6-mcp.tgz

provisioning_ansible Updated

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-79-41e503a.tgz

target ubuntu system

https://binary.mirantis.com/bm/bin/efi/ubuntu/tgz-bionic-20210622161844

Docker images

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-bionic-20210908111623

dnsmasq

mirantis.azurecr.io/general/dnsmasq:focal-20210617094827

httpd

mirantis.azurecr.io/lcm/nginx:1.18.0

ironic

mirantis.azurecr.io/openstack/ironic:victoria-bionic-20210719060025

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:victoria-bionic-20210719060025

ironic-operator Updated

mirantis.azurecr.io/bm/ironic-operator:base-bionic-20210908110402

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20210608113804

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-bionic-20210819150000

mariadb

mirantis.azurecr.io/general/mariadb:10.4.17-bionic-20210617085111

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-bionic-20210617094817


Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.25.6.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.25.6.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.25.6.tgz

agent-controller New

https://binary.mirantis.com/core/helm/agent-controller-1.25.6.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.25.6.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.25.6.tgz

azure-credentials-controller Updated

https://binary.mirantis.com/core/helm/azure-credentials-controller-1.25.6.tgz

azure-provider Updated

https://binary.mirantis.com/core/helm/azure-provider-1.25.6.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.25.6.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.25.6.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.25.6.tgz

equinix-credentials-controller

https://binary.mirantis.com/core/helm/equinix-credentials-controller-1.25.6.tgz

equinix-provider

https://binary.mirantis.com/core/helm/equinix-provider-1.25.6.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.25.6.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.25.6.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.25.6.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.25.8.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.25.6.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.25.6.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.25.6.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.25.6.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.25.6.tgz

rbac-controller New

https://binary.mirantis.com/core/helm/rbac-controller-1.25.7.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.25.6.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.25.6.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.25.6.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.25.6.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.25.6.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.25.6

agent-controller New

mirantis.azurecr.io/core/agent-controller:1.25.6

aws-cluster-api-controller Updated

mirantis.azurecr.io/core/aws-cluster-api-controller:1.25.6

aws-credentials-controller Updated

mirantis.azurecr.io/core/aws-credentials-controller:1.25.6

azure-cluster-api-controller Updated

mirantis.azurecr.io/core/azure-cluster-api-controller:1.25.6

azure-credentials-controller Updated

mirantis.azurecr.io/core/azure-credentials-controller:1.25.6

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.25.6

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.25.6

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.25.6

cluster-api-provider-equinix Updated

mirantis.azurecr.io/core/cluster-api-provider-equinix:1.25.6

equinix-credentials-controller Updated

mirantis.azurecr.io/core/equinix-credentials-controller:1.25.6

frontend Updated

mirantis.azurecr.io/core/frontend:1.25.8

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.25.6

kproxy Updated

mirantis.azurecr.io/lcm/kproxy:1.25.6

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:v0.3.0-41-g6ecc1974

nginx

mirantis.azurecr.io/lcm/nginx:1.18.0

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.25.6

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.25.6

rbac-controller New

mirantis.azurecr.io/core/rbac-controller:1.25.7

registry

mirantis.azurecr.io/lcm/registry:2.7.1

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.25.6

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.25.6

squid-proxy Updated

mirantis.azurecr.io/core/squid-proxy:0.0.1-5

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-api-controller:1.25.6

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.25.6


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

iamctl-linux Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.3-linux

iamctl-darwin Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.3-darwin

iamctl-windows Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.3-windows

Helm charts Updated

iam

http://binary.mirantis.com/iam/helm/iam-2.4.8.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.8.tgz

keycloak_proxy

http://binary.mirantis.com/core/helm/keycloak_proxy-1.26.1.tgz

Docker images

api Updated

mirantis.azurecr.io/iam/api:0.5.3

auxiliary Updated

mirantis.azurecr.io/iam/auxiliary:0.5.3

kubernetes-entrypoint Updated

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.0-20200311160233

mariadb

mirantis.azurecr.io/general/mariadb:10.4.16-bionic-20201105025052

keycloak Updated

mirantis.azurecr.io/iam/keycloak:0.5.3

keycloak-gatekeeper Updated

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-2

2.11.0

The Mirantis Container Cloud GA release 2.11.0:

  • Introduces support for the Cluster release 7.1.0 that is based on Mirantis Container Runtime 20.10.5 and Mirantis Kubernetes Engine 3.4.0 with Kubernetes 1.20.

  • Introduces support for the Cluster release 5.18.0 that is based on Mirantis Kubernetes Engine 3.3.6 with Kubernetes 1.18 and Mirantis Container Runtime 20.10.5.

  • Introduces support for the Cluster release 6.18.0 that is based on the Cluster release 5.18.0 and represents Mirantis OpenStack for Kubernetes (MOS) 21.4.

  • Continues supporting the Cluster release 6.16.0 that is based on the Cluster release 5.16.0 and represents Mirantis OpenStack for Kubernetes (MOS) 21.3.

  • Supports deprecated Cluster releases 5.17.0, 6.16.0, and 7.0.0 that will become unsupported in the following Container Cloud releases.

  • Supports the Cluster release 5.11.0 only for attachment of existing MKE 3.3.4 clusters. For the deployment of new or attachment of existing MKE 3.3.6 clusters, the latest available Cluster release is used.

Caution

Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

Caution

Before upgrading an existing managed cluster with StackLight deployed in HA mode to the latest Cluster release, add the StackLight node label to at least 3 worker machines as described in Upgrade managed clusters with StackLight deployed in HA mode. Otherwise, the cluster upgrade will fail.

This section outlines release notes for the Container Cloud release 2.11.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.11.0. For the list of enhancements in the Cluster releases 7.1.0, 6.18.0, and 5.18.0 that are supported by the Container Cloud release 2.11.0, see the Cluster releases (managed).


Support for the Microsoft Azure cloud provider

TECHNOLOGY PREVIEW

Introduced the Technology Preview support for the Microsoft Azure cloud provider, including support for creating and operating of management, regional, and managed clusters.

Note

For the Technology Preview feature definition, refer to Technology Preview features.

RHEL 7.9 bootstrap node for the vSphere-based provider

Implemented the capability to bootstrap the vSphere provider clusters on the bootstrap node that is based on RHEL 7.9.

Validation labels for the vSphere-based VM templates

Implemented validation labels for the vSphere-based VM templates in the Container Cloud web UI. If a VM template was initially created using the built-in Packer mechanism, the Container Cloud version is displayed as a green label on the right side of the VM templates drop-down list. Otherwise, a template is marked with the Unknown label.

Mirantis recommends using only green-labeled templates for production deployments.

Automatic migration of Docker data and LVP volumes to NVMe on AWS clusters

Implemented automatic migration of Docker data located at /var/lib/docker and local provisioner volumes from existing EBS to local NVMe SSDs during the AWS-based management and managed clusters upgrade. On new clusters, the /var/lib/docker Docker data is now located on local NVMe SSDs by default.

The migration allows moving heavy workloads such as etcd and MariaDB to local NVMe SSDs that significantly improves cluster performance.

Switch of core Helm releases from v2 to v3

Upgraded all core Helm releases in the ClusterRelease and KaasRelease objects from v2 to v3. Switching of the remaining Helm releases to v3 will be implemented in one of the following Container Cloud releases.

Bond interfaces for baremetal-based management clusters

Added the possibility to configure L2 templates for the baremetal-based management cluster to set up a bond network interface to the PXE/Management network.

Apply this configuration to the bootstrap templates before you run the bootstrap script to deploy the management cluster.
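
A minimal sketch of such a bond definition, assuming that the L2 template npTemplate section accepts netplan-style syntax and that the first two NICs of each host form the bond (the interface macros and parameters below are illustrative only):

bonds:
  bond0:
    interfaces:
      - {{nic 0}}
      - {{nic 1}}
    parameters:
      mode: 802.3ad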

Caution

  • Using this configuration requires that every host in your management cluster has at least two physical interfaces.

  • Connect at least two interfaces per host to an Ethernet switch that supports Link Aggregation Control Protocol (LACP) port groups and LACP fallback.

  • Configure an LACP group on the ports connected to the NICs of a host.

  • Configure the LACP fallback on the port group to ensure that the host can boot over the PXE network before the bond interface is set up on the host operating system.

  • Configure server BIOS for both NICs of a bond to be PXE-enabled.

  • If the server does not support booting from multiple NICs, configure the port of the LACP group that is connected to the PXE-enabled NIC of a server to be primary port. With this setting, the port becomes active in the fallback mode.

Bare metal advanced configuration using web UI

Implemented the following amendments for bare metal advanced configuration in the Container Cloud web UI:

  • On the Cluster page, added the Subnets section with a list of available subnets.

  • Added the Add new subnet wizard.

  • Renamed the BareMetal tab to BM Hosts.

  • Added the BM Host Profiles tab that contains a list of custom bare metal host profiles, if any.

  • Added the BM Host Profile drop-down list to the Create new machine wizard.

Equinix Metal capacity labels for machines in web UI

Implemented the verification mechanism for the actual capacity of the Equinix Metal facilities before machines deployment. Now, you can see the following labels in the Equinix Metal Create a machine wizard of the Container Cloud web UI:

  • Normal - the facility has a lot of available machines. Prioritize this machine type over others.

  • Limited - the facility has a limited number of machines. Do not request many machines of this type.

  • Unknown - Container Cloud cannot fetch information about the capacity level since the feature is disabled.

Documentation enhancements

On top of continuous improvements delivered to the existing Container Cloud guides, added a procedure on how to update the Keycloak IP address on bare metal clusters.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.11.0 along with the Cluster releases 7.1.0, 6.18.0, and 5.18.0.

For more issues addressed for the Cluster release 6.18.0, see also addressed issues 2.10.0.

  • [15698][vSphere] Fixed the issue with a load balancer virtual IP address (VIP) being assigned to each manager node on any type of the vSphere-based cluster.

  • [7573][Ceph] To avoid the Rook community issue with updating Rook to version 1.6, added the rgw_data_log_backing configuration option set to omap by default.

  • [10050][Ceph] Fixed the issue with Ceph OSD pod being stuck in the CrashLoopBackOff state due to the Ceph OSD authorization key failing to be created properly after disk replacement if a custom BareMetalHostProfile was used.

  • [16233][Ceph][Upgrade] Fixed the issue with ironic and dnsmasq pods failing during a baremetal-based management cluster upgrade due to Ceph not unmounting RBD volumes.

  • [7655][BM] Fixed the issue with a bare metal cluster being deployed successfully but with runtime errors in the IpamHost object if an L2 template was configured incorrectly.

  • [15348][StackLight] Fixed the issue with some panels of the Alertmanager and Prometheus Grafana dashboards not displaying data due to an invalid query.

  • [15834][StackLight] Removed the CPU resource limit from the elasticsearch-curator container to avoid issues with the CPUThrottlingHigh alert false-positively firing for Elasticsearch Curator.

  • [16141][StackLight] Fixed the issue with the Alertmanager pod getting stuck in CrashLoopBackOff during upgrade of a management, regional, or managed cluster and thus causing upgrade failure with the Loading configuration file failed error message in logs.

  • [15766][StackLight][Upgrade] Fixed the issue with management or regional cluster upgrade failure from version 2.9.0 to 2.10.0 and managed cluster from 5.16.0 to 5.17.0 with the Cannot evict pod error for the patroni-12-0, patroni-12-1, or patroni-12-2 pod.

  • [16398][StackLight] Fixed the issue with inability to set require_tls to false for Alertmanager email notifications.

  • [13303] [LCM] Fixed the issue with managed clusters update from the Cluster release 6.12.0 to 6.14.0 failing with worker nodes being stuck in the Deploy state with the Network is unreachable error.

  • [13845] [LCM] Fixed the issue with the LCM Agent upgrade failing with x509 error during managed clusters update from the Cluster release 6.12.0 to 6.14.0.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.11.0 including the Cluster releases 7.1.0, 6.18.0, and 5.18.0.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


AWS
[8013] Managed cluster deployment requiring PVs may fail

Fixed in the Cluster release 7.0.0

Note

The issue below affects only the Kubernetes 1.18 deployments. Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

On a management cluster with multiple AWS-based managed clusters, some clusters fail to complete the deployments that require persistent volumes (PVs), for example, Elasticsearch. Some of the affected pods get stuck in the Pending state with the pod has unbound immediate PersistentVolumeClaims and node(s) had volume node affinity conflict errors.

Warning

The workaround below applies to HA deployments where data can be rebuilt from replicas. If you have a non-HA deployment, back up any existing data before proceeding, since all data will be lost while applying the workaround.

Workaround:

  1. Obtain the persistent volume claims related to the storage mounts of the affected pods:

    kubectl get pod/<pod_name1> pod/<pod_name2> \
    -o jsonpath='{.spec.volumes[?(@.persistentVolumeClaim)].persistentVolumeClaim.claimName}'
    

    Note

    In the command above and in the subsequent steps, substitute the parameters enclosed in angle brackets with the corresponding values.

  2. Delete the affected Pods and PersistentVolumeClaims to reschedule them. For example, for StackLight:

    kubectl -n stacklight delete \
      pod/<pod_name1> pod/<pod_name2> ... \
      pvc/<pvc_name1> pvc/<pvc_name2> ...


Equinix Metal
[16718] Equinix Metal provider fails to create machines with SSH keys error

Fixed in 2.12.0

If an Equinix Metal based cluster is being deployed in an Equinix Metal project with no SSH keys, the Equinix Metal provider fails to create machines with the following error:

Failed to create machine "kaas-mgmt-controlplane-0"...
failed to create device: POST https://api.equinix.com/metal/v1/projects/...
<deviceID> must have at least one SSH key or explicitly send no_ssh_keys option

Workaround:

  1. Create a new SSH key.

  2. Log in to the Equinix Metal console.

  3. In Project Settings, click Project SSH Keys.

  4. Click Add New Key and add details of the newly created SSH key.

  5. Click Add.

  6. Restart the cluster deployment.


Bare metal
[17118] Failure to add a new machine to cluster

Fixed in 2.12.0

Adding a new machine to a baremetal-based managed cluster may fail after the baremetal-based management cluster upgrade. The issue occurs because the PXE boot does not work for the new node. In this case, the file /volume/tftpboot/ipxe.efi not found errors appear in the dnsmasq-tftp logs.

Workaround:

  1. Log in to a local machine where your management cluster kubeconfig is located and where kubectl is installed.

  2. Scale the Ironic deployment down to 0 replicas:

    kubectl -n kaas scale deployments/ironic --replicas=0
    
  3. Scale the Ironic deployment up to 1 replica:

    kubectl -n kaas scale deployments/ironic --replicas=1
    


OpenStack
[16959] Proxy-based regional cluster creation fails

Fixed in 2.12.0

An OpenStack-based regional cluster being deployed using proxy fails with the Not ready objects: not ready: statefulSets: kaas/mcc-cache got 0/1 replicas error message due to the issue with the proxy secret creation.

Workaround:

  1. Run the following command:

    kubectl get secret -n kube-system mke-proxy-secret -o yaml | sed '/namespace.*/d' | kubectl create -n kaas -f -
    
  2. Rerun the bootstrap script:

    ./bootstrap.sh deploy_regional
    

[10424] Regional cluster cleanup fails by timeout

An OpenStack-based regional cluster cleanup fails with the timeout error.

Workaround:

  1. Wait for the Cluster object to be deleted in the bootstrap cluster:

    kubectl --kubeconfig <(./bin/kind get kubeconfig --name clusterapi) get cluster
    

    The system output must be empty.

  2. Remove the bootstrap cluster manually:

    ./bin/kind delete cluster --name clusterapi
    


vSphere
[14458] Failure to create a container for pod: cannot allocate memory

Fixed in 2.9.0 for new clusters

Newly created pods may fail to run and have the CrashLoopBackOff status on long-living Container Cloud clusters deployed on RHEL 7.8 using the VMware vSphere provider. The following is an example output of the kubectl describe pod <pod-name> -n <projectName> command:

State:        Waiting
Reason:       CrashLoopBackOff
Last State:   Terminated
Reason:       ContainerCannotRun
Message:      OCI runtime create failed: container_linux.go:349:
              starting container process caused "process_linux.go:297:
              applying cgroup configuration for process caused
              "mkdir /sys/fs/cgroup/memory/kubepods/burstable/<pod-id>/<container-id>>:
              cannot allocate memory": unknown

The issue occurs due to the Kubernetes and Docker community issues.

According to the RedHat solution, the workaround is to disable the kernel memory accounting feature by appending cgroup.memory=nokmem to the kernel command line.

Note

The workaround below applies to the existing clusters only. The issue is resolved for new Container Cloud 2.9.0 deployments since the workaround below automatically applies to the VM template built during the vSphere-based management cluster bootstrap.

Apply the following workaround on each machine of the affected cluster.

Workaround:

  1. SSH to any machine of the affected cluster using mcc-user and the SSH key provided during the cluster creation to proceed as the root user.

  2. In /etc/default/grub, set cgroup.memory=nokmem for GRUB_CMDLINE_LINUX.
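
    For example, the resulting line in /etc/default/grub may look as follows, where the parameters other than cgroup.memory=nokmem are illustrative and your existing parameters must be preserved:

    GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet cgroup.memory=nokmem"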

  3. Update kernel:

    yum install kernel kernel-headers kernel-tools kernel-tools-libs kexec-tools
    
  4. Update the grub configuration:

    grub2-mkconfig -o /boot/grub2/grub.cfg
    
  5. Reboot the machine.

  6. Wait for the machine to become available.

  7. Wait for 5 minutes for Docker and Kubernetes services to start.

  8. Verify that the machine is Ready:

    docker node ls
    kubectl get nodes
    
  9. Repeat the steps above on the remaining machines of the affected cluster.


[14080] Node leaves the cluster after IP address change

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

A vSphere-based management cluster bootstrap fails due to a node leaving the cluster after an accidental IP address change.

The issue may affect a vSphere-based cluster only when IPAM is not enabled and IP addresses assignment to the vSphere virtual machines is done by a DHCP server present in the vSphere network.

By default, a DHCP server keeps lease of the IP address for 30 minutes. Usually, a VM dhclient prolongs such lease by frequent DHCP requests to the server before the lease period ends. The DHCP prolongation request period is always less than the default lease time on the DHCP server, so prolongation usually works. But in case of network issues, for example, when dhclient from the VM cannot reach the DHCP server, or the VM is being slowly powered on for more than the lease time, such VM may lose its assigned IP address. As a result, it obtains a new IP address.

Container Cloud does not support network reconfiguration after the IP of the VM has been changed. Therefore, such issue may lead to a VM leaving the cluster.

Symptoms:

  • One of the nodes is in the NodeNotReady or down state:

    kubectl get nodes -o wide
    docker node ls
    
  • The UCP Swarm manager logs on the healthy manager node contain the following example error:

    docker logs -f ucp-swarm-manager
    
    level=debug msg="Engine refresh failed" id="<docker node ID>|<node IP>: 12376"
    
  • If the affected node is a manager:

    • The output of the docker info command contains the following example error:

      Error: rpc error: code = Unknown desc = The swarm does not have a leader. \
      It's possible that too few managers are online. \
      Make sure more than half of the managers are online.
      
    • The UCP controller logs contain the following example error:

      docker logs -f ucp-controller
      
      "warning","msg":"Node State Active check error: \
      Swarm Mode Manager health check error: \
      info: Cannot connect to the Docker daemon at tcp://<node IP>:12376. \
      Is the docker daemon running?
      
  • On the affected node, the IP address on the first interface eth0 does not match the IP address configured in Docker. Verify the Node Address field in the output of the docker info command.
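
    For example, to compare the two addresses:

    ip addr show eth0
    docker info | grep 'Node Address'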

  • The following lines are present in /var/log/messages:

    dhclient[<pid>]: bound to <node IP> -- renewal in 1530 seconds
    

    If there are several lines where the IP is different, the node is affected.

Workaround:

Select from the following options:

  • Bind IP addresses for all machines to their MAC addresses on the DHCP server for the dedicated vSphere network. In this case, VMs receive only specified IP addresses that never change.

  • Remove the Container Cloud node IPs from the IP range on the DHCP server for the dedicated vSphere network and configure the first interface eth0 on VMs with a static IP address. See the example after this list.

  • If a managed cluster is affected, redeploy it with IPAM enabled for new machines to be created and IPs to be assigned properly.
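
A minimal sketch of the static configuration for the second option, assuming that the VM uses RHEL-style ifcfg files and that the address values below are placeholders for your environment (/etc/sysconfig/network-scripts/ifcfg-eth0):

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=<node IP>
PREFIX=<network prefix length>
GATEWAY=<gateway IP>
DNS1=<DNS server IP>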


LCM
[16146] Stuck kubelet on the Cluster release 5.x.x series

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

Occasionally, kubelet may get stuck on the Cluster release 5.x.x series with different errors in the ucp-kubelet containers leading to the nodes failures. The following error occurs every time when accessing the Kubernetes API server:

an error on the server ("") has prevented the request from succeeding

As a workaround, restart ucp-kubelet on the failed node:

ctr -n com.docker.ucp snapshot rm ucp-kubelet
docker rm -f ucp-kubelet

[8367] Adding of a new manager node to a managed cluster hangs on Deploy stage

Fixed in 2.12.0

Adding a new manager node to a managed cluster may hang due to issues with joining etcd from the new node to the existing etcd cluster. The new manager node hangs in the Deploy stage.

Symptoms:

  • The Ansible run tries executing the Wait for Docker UCP to be accessible step and fails with the following error message:

    Status code was -1 and not [200]: Request failed: <urlopen error [Errno 111] Connection refused>
    
  • The etcd logs on the leader etcd node contain the following example error message occurring every 1-2 minutes:

    2021-06-10 03:21:53.196677 W | etcdserver: not healthy for reconfigure,
    rejecting member add {ID:22bb1d4275f1c5b0 RaftAttributes:{PeerURLs:[https://<new manager IP>:12380]
    IsLearner:false} Attributes:{Name: ClientURLs:[]}}
    
    • To determine the etcd leader, run on any manager node:

      docker exec -it ucp-kv sh
      # From the inside of the container:
      ETCDCTL_API=3 etcdctl -w table --endpoints=https://<1st manager IP>:12379,https://<2nd manager IP>:12379,https://<3rd manager IP>:12379 endpoint status
      
    • To verify logs on the leader node:

      docker logs ucp-kv
      

Root cause:

In case of an unlucky network partition, the leader may lose quorum and members are not able to perform the election. For more details, see Official etcd documentation: Learning, figure 5.

Workaround:

  1. Restart etcd on the leader node:

    docker rm -f ucp-kv
    
  2. Wait several minutes until the etcd cluster starts and reconciles.

    The deployment of the new manager node will proceed and it will join the etcd cluster. After that, other MKE components will be configured and the node deployment will be finished successfully.


[6066] Helm releases get stuck in FAILED or UNKNOWN state

Note

The issue affects only Helm v2 releases and is addressed for Helm v3. Starting from Container Cloud 2.19.0, all Helm releases are switched to v3.

During a management, regional, or managed cluster deployment, Helm releases may get stuck in the FAILED or UNKNOWN state although the corresponding machines statuses are Ready in the Container Cloud web UI. For example, if the StackLight Helm release fails, the links to its endpoints are grayed out in the web UI. In the cluster status, providerStatus.helm.ready and providerStatus.helm.releaseStatuses.<releaseName>.success are false.

HelmBundle cannot recover from such states and requires manual actions. The workaround below describes the recovery steps for the stacklight release that got stuck during a cluster deployment. Use this procedure as an example for other Helm releases as required.

Workaround:

  1. Verify the failed release has the UNKNOWN or FAILED status in the HelmBundle object:

    kubectl --kubeconfig <regionalClusterKubeconfigPath> get helmbundle <clusterName> -n <clusterProjectName> -o=jsonpath={.status.releaseStatuses.stacklight}
    
    In the command above and in the steps below, replace the parameters enclosed in angle brackets with the corresponding values of your cluster.
    

    Example of system response:

    stacklight:
    attempt: 2
    chart: ""
    finishedAt: "2021-02-05T09:41:05Z"
    hash: e314df5061bd238ac5f060effdb55e5b47948a99460c02c2211ba7cb9aadd623
    message: '[{"occurrence":1,"lastOccurrenceDate":"2021-02-05 09:41:05","content":"error
      updating the release: rpc error: code = Unknown desc = customresourcedefinitions.apiextensions.k8s.io
      \"helmbundles.lcm.mirantis.com\" already exists"}]'
    notes: ""
    status: UNKNOWN
    success: false
    version: 0.1.2-mcp-398
    
  2. Log in to the helm-controller pod console:

    kubectl --kubeconfig <affectedClusterKubeconfigPath> exec -n kube-system -it helm-controller-0 sh -c tiller
    
  3. Download the Helm v3 binary. For details, see official Helm documentation.

  4. Remove the failed release:

    helm delete <failed-release-name>
    

    For example:

    helm delete stacklight
    

    Once done, the release triggers for redeployment.



IAM
[13385] MariaDB pods fail to start after SST sync

Fixed in 2.12.0

The MariaDB pods fail to start after MariaDB blocks itself during the State Snapshot Transfers sync.

Workaround:

  1. Verify the failed pod readiness:

    kubectl describe pod -n kaas <failedMariadbPodName>
    

    If the readiness probe failed with the WSREP not synced message, proceed to the next step. Otherwise, assess the MariaDB pod logs to identify the failure root cause.
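
    For example, to view the pod logs:

    kubectl -n kaas logs <failedMariadbPodName>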

  2. Obtain the MariaDB admin password:

    kubectl get secret -n kaas mariadb-dbadmin-password -o jsonpath='{.data.MYSQL_DBADMIN_PASSWORD}' | base64 -d ; echo
    
  3. Verify that wsrep_local_state_comment is Donor or Desynced:

    kubectl exec -it -n kaas <failedMariadbPodName> -- mysql -uroot -p<mariadbAdminPassword> -e "SHOW status LIKE \"wsrep_local_state_comment\";"
    
  4. Restart the failed pod:

    kubectl delete pod -n kaas <failedMariadbPodName>
    

[18331] Keycloak admin console menu disappears on ‘Add identity provider’ page

Fixed in 2.18.0

During configuration of a SAML identity provider using the Add identity provider menu of the Keycloak admin console, the page style breaks and the Save and Cancel buttons disappear.

Workaround:

  1. Log in to the Keycloak admin console.

  2. In the sidebar menu, switch to the Master realm.

  3. Navigate to Realm Settings > Themes.

  4. In the Admin Console Theme drop-down menu, select keycloak.

  5. Click Save and refresh the browser window to apply the changes.


StackLight
[16843] Inability to override default route matchers for Salesforce notifier

Fixed in 2.12.0

It may be impossible to override the default route matchers for Salesforce notifier.

Note

After applying the workaround, you may notice the following warning message. It is expected and does not affect configuration rendering:

Warning: Merging destination map for chart 'stacklight'. Overwriting table
item 'match', with non table value: []

Workaround:

  1. Open the StackLight configuration manifest as described in StackLight configuration procedure.

  2. In alertmanagerSimpleConfig.salesForce, specify the following configuration:

    alertmanagerSimpleConfig:
      salesForce:
        route:
          match: []
          match_re:
            your_matcher_key1: your_matcher_value1
            your_matcher_key2: your_matcher_value2
            ...
    

[17771] Watchdog alert missing in Salesforce route

Fixed in 2.13.0

The Watchdog alert is not routed to Salesforce by default.

Note

After applying the workaround, you may notice the following warning message. It is expected and does not affect configuration rendering:

Warning: Merging destination map for chart 'stacklight'. Overwriting table
item 'match', with non table value: []

Workaround:

  1. Open the StackLight configuration manifest as described in StackLight configuration procedure.

  2. In alertmanagerSimpleConfig.salesForce, specify the following configuration:

    alertmanagerSimpleConfig:
      salesForce:
        route:
          match: []
          match_re:
            severity: "informational|critical"
          matchers:
          - severity=~"informational|critical"
    


Storage
[16300] ManageOsds works unpredictably on Rook 1.6.8 and Ceph 15.2.13

Affects only Container Cloud 2.11.0, 2.12.0, 2.13.0, and 2.13.1

Ceph LCM automatic operations such as Ceph OSD or Ceph node removal are unstable for the new Rook 1.6.8 and Ceph 15.2.13 (Ceph Octopus) versions and may cause data corruption. Therefore, manageOsds is disabled until further notice.

As a workaround, to safely remove a Ceph OSD or node from a Ceph cluster, perform the steps described in Remove Ceph OSD manually.



Bootstrap
[16873] Bootstrap fails with ‘failed to establish connection with tiller’ error

Fixed in 2.12.0

If the latest Ubuntu 18.04 image, for example, with kernel 4.15.0-153-generic, is installed on the bootstrap node, a management cluster bootstrap fails during the setup of the Kubernetes cluster by kind.

The issue occurs since the kind version 0.9.0 delivered with the bootstrap script is not compatible with the latest Ubuntu 18.04 image that requires kind version 0.11.1.

To verify that the bootstrap node is affected by the issue:

  1. In the bootstrap script stdout, verify the connection to Tiller.

    Example of system response extract on an affected bootstrap node:

    clusterdeployer.go:164] Initialize Tiller in bootstrap cluster.
    bootstrap_create.go:64] unable to initialize Tiller in bootstrap cluster: \
    failed to establish connection with tiller
    
  2. In the bootstrap script stdout, identify the step after which the bootstrap process fails.

    Example of system response extract on an affected bootstrap node:

    clusterdeployer.go:128] Connecting to bootstrap cluster
    
  3. In the kind cluster, verify the kube-proxy service readiness:

    ./bin/kind get kubeconfig --name clusterapi > /tmp/kind_kubeconfig.yaml
    
    ./bin/kubectl --kubeconfig /tmp/kind_kubeconfig.yaml get po -n kube-system | grep kube-proxy
    
    ./bin/kubectl --kubeconfig /tmp/kind_kubeconfig.yaml -n kube-system logs kube-proxy-<podPostfixID>
    

    Example of the kube-proxy service stdout extract on an affected bootstrap node:

    I0831 11:56:16.139300  1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
    F0831 11:56:16.139313  1 server.go:497] open /proc/sys/net/netfilter/nf_conntrack_max: permission denied
    

If the outputs of the verification steps above match the examples, proceed with the workaround below.

Workaround:

  1. Clean up the bootstrap cluster:

    ./bin/kind delete cluster --name clusterapi
    
  2. Upgrade the kind binary to version 0.11.1:

    curl -L https://github.com/kubernetes-sigs/kind/releases/download/v0.11.1/kind-linux-amd64 -o bin/kind
    
    chmod a+x bin/kind
    
  3. Restart the bootstrap script:

    ./bootstrap.sh all
    


Upgrade
[17477] StackLight in HA mode is not deployed or cluster update is blocked

Fixed in 2.12.0

New managed clusters deployed using the Cluster release 6.18.0 with StackLight enabled in the HA mode on control plane nodes do not have StackLight deployed. The update of existing clusters with such StackLight configuration that were created using the Cluster release 6.16.0 is blocked with the following error message:

cluster release version upgrade is forbidden: \
Minimum number of worker machines with StackLight label is 3

Workaround:

  1. On the affected managed cluster:

    1. Create a key-value pair that will be used as a unique label on the cluster nodes. In our example, it is forcedRole: stacklight.

      To verify the label names that already exist on the cluster nodes:

      kubectl get nodes --show-labels
      
    2. Add the new label to the target nodes for StackLight. For example, to the Kubernetes master nodes:

      kubectl label nodes --selector=node-role.kubernetes.io/master forcedRole=stacklight
      
    3. Verify that the new label is added:

      kubectl get nodes --show-labels
      
  2. On the related management cluster:

    1. Configure nodeSelector for the StackLight components by modifying the affected Cluster object:

      kubectl edit cluster <affectedManagedClusterName> -n <affectedManagedClusterProjectName>
      

      For example:

      spec:
        ...
        providerSpec:
          ...
          value:
            ...
            helmReleases:
              ...
              - name: stacklight
                values:
                  ...
                  nodeSelector:
                    default:
                      forcedRole: stacklight
      
    2. Select from the following options:

      • If you faced the issue during a managed cluster deployment, skip this step.

      • If you faced the issue during a managed cluster update, wait until all StackLight components resources are recreated on the target nodes with updated node selectors.

        To monitor the cluster status:

        kubectl get cluster <affectedManagedClusterName> -n <affectedManagedClusterProjectName> -o jsonpath='{.status.providerStatus.conditions[?(@.type=="StackLight")]}' | jq
        

        In the cluster status, verify that the elasticsearch-master and prometheus-server resources are ready. The process can take up to 30 minutes.

        Example of a negative system response:

        {
          "message": "not ready: statefulSets: stacklight/elasticsearch-master got 2/3 replicas",
          "ready": false,
          "type": "StackLight"
        }
        
  3. In the Container Cloud web UI, add a fake StackLight label to any 3 worker nodes to satisfy the deployment requirement as described in Create a machine using web UI. Eventually, StackLight will be still placed on the target nodes with the forcedRole: stacklight label.

    Once done, the StackLight deployment or update proceeds.


[17412] Cluster upgrade fails on the KaaSCephCluster CRD update

An upgrade of a bare metal or Equinix metal based management cluster originally deployed using the Container Cloud release earlier than 2.8.0 fails with the following error message:

Upgrade "kaas-public-api" failed: \
cannot patch "kaascephclusters.kaas.mirantis.com" with kind \
CustomResourceDefinition: CustomResourceDefinition.apiextensions.k8s.io \
kaascephclusters.kaas.mirantis.com" is invalid: \
spec.preserveUnknownFields: Invalid value: true: \
must be false in order to use defaults in the schema

Workaround:

  1. Change the preserveUnknownFields value for the KaaSCephCluster CRD to false:

    kubectl patch crd kaascephclusters.kaas.mirantis.com -p '{"spec":{"preserveUnknownFields":false}}'
    
  2. Upgrade kaas-public-api:

    helm -n kaas upgrade kaas-public-api https://binary.mirantis.com/core/helm/kaas-public-api-1.24.6.tgz --reuse-values
    

[17069] Cluster upgrade fails with the ‘Failed to configure Ceph cluster’ error

Fixed in 2.12.0

An upgrade of a bare metal or Equinix Metal based management or managed cluster fails with the following exemplary error messages:

- message: 'Failed to configure Ceph cluster: ceph cluster verification is failed:
  [PG_AVAILABILITY: Reduced data availability: 33 pgs inactive, OSD_DOWN: 3 osds
  down, OSD_HOST_DOWN: 3 hosts (3 osds) down, OSD_ROOT_DOWN: 1 root (3 osds) down,
  Not all Osds are up]'

- message: 'not ready: deployments: kaas/dnsmasq got 0/1 replicas, kaas/ironic got
    0/1 replicas, rook-ceph/rook-ceph-osd-0 got 0/1 replicas, rook-ceph/rook-ceph-osd-1
    got 0/1 replicas, rook-ceph/rook-ceph-osd-2 got 0/1 replicas; statefulSets: kaas/httpd
    got 0/1 replicas, kaas/mariadb got 0/1 replicas'
  ready: false
  type: Kubernetes

The cluster is affected by the issue if it has different Ceph versions installed:

kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o name) -- ceph versions

Example of system response:

"mon": {
    "ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 3
},
"mgr": {
    "ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 1
},
"osd": {
    "ceph version 14.2.19 (bb796b9b5bab9463106022eef406373182465d11) nautilus (stable)": 3
},
"mds": {},
"overall": {
    "ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 4
    "ceph version 14.2.19 (bb796b9b5bab9463106022eef406373182465d11) nautilus (stable)": 3
}

Additionally, the output may display no Ceph OSDs:

  "mon": {
    "ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 3
  },
  "mgr": {
    "ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 1
  },
  "osd": {},
  "mds": {},
  "overall": {
    "ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 4
  }

Workaround:

  1. Manually update the image of each rook-ceph-osd deployment to mirantis.azurecr.io/ceph/ceph:v15.2.13:

    kubectl -n rook-ceph edit deploy rook-ceph-osd-<i>
    

    In the editor that opens, find the 14.2.19 image version and replace it with 15.2.13.
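
    Alternatively, assuming the OSD container inside each deployment is named osd (the Rook default), you can update the image non-interactively:

    kubectl -n rook-ceph set image deploy/rook-ceph-osd-<i> osd=mirantis.azurecr.io/ceph/ceph:v15.2.13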

  2. Verify that all OSDs for all rook-ceph-osd deployments have the 15.2.13 image version:

    kubectl -n rook-ceph get pod -l app=rook-ceph-osd -o jsonpath='{range .items[*]}{@.metadata.name}{" "}{@.spec.containers[0].image}{"\n"}{end}'
    
  3. Restart the rook-ceph-operator pod:

    kubectl -n rook-ceph delete pod -l app=rook-ceph-operator
    

[17007] False-positive ‘release: “squid-proxy” not found’ error

Fixed in 2.12.0

During a management cluster upgrade of any supported cloud provider except vSphere, you may notice the following false-positive messages for the squid-proxy Helm release that is disabled in Container Cloud 2.11.0:

Helm charts not installed yet: squid-proxy

Error: release: "squid-proxy" not found

Ignore these errors for any cloud provider except vSphere, which continues using squid-proxy in Container Cloud 2.11.0.


[16964] Management cluster upgrade gets stuck

Fixed in 2.12.0

Management cluster upgrade may get stuck and then fail with the following error message: ClusterWorkloadLocks in cluster default/kaas-mgmt are still active - ceph-clusterworkloadlock.

To verify that the cluster is affected:

  1. Enter the ceph-tools pod.
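
    For example:

    kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o name) -- bash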

  2. Verify that some Ceph daemons were not upgraded to Octopus:

    ceph versions
    
  3. Run ceph -s and verify that the output contains the following health warning:

    mons are allowing insecure global_id reclaim
    clients are allowing insecure global_id reclaim
    

If the upgrade is stuck, some Ceph daemons are stuck on upgrade to Octopus, and the health warning above is present, perform the following steps.

Workaround:

  1. Run the following commands:

    ceph config set global mon_warn_on_insecure_global_id_reclaim false
    ceph config set global mon_warn_on_insecure_global_id_reclaim_allowed false
    
  2. Exit the ceph-tools pod.

  3. Restart the rook-ceph-operator pod:

    kubectl -n rook-ceph delete pod -l app=rook-ceph-operator
    

[16777] Cluster update fails due to Patroni being not ready

Fixed in 2.12.0

An update of the Container Cloud management, regional, or managed cluster of any cloud provider type from the Cluster release 7.0.0 to 7.1.0 fails due to the failed Patroni pod.

As a workaround, increase the default resource requests and limits for PostgreSQL as follows:

resources:
  postgresql:
    requests:
      cpu: "256m"
      memory: "1Gi"
    limits:
      cpu: "512m"
      memory: "2Gi"

For details, see MOSK Operations Guide: StackLight configuration parameters - Resource limits.


[16379,23865] Cluster update fails with the FailedMount warning

Fixed in 2.19.0

An Equinix-based management or managed cluster fails to update with the FailedAttachVolume and FailedMount warnings.

Workaround:

  1. Verify that the description of the pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    
    • <affectedProjectName> is the Container Cloud project name where the pods failed to run

    • <affectedPodName> is a pod name that failed to run in this project

    In the pod description, identify the node name where the pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the pod that fails to init to 0 replicas.
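
    For example, assuming the affected workload is a StatefulSet named <affectedStatefulSetName> in the <affectedProjectName> project (both names are placeholders), scale it down as follows:

    kubectl -n <affectedProjectName> scale statefulset <affectedStatefulSetName> --replicas=0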

  4. On every csi-rbdplugin pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete volumeattachment of the affected pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state is Running.


[9899] Helm releases get stuck in PENDING_UPGRADE during cluster update

Fixed in 2.14.0

Helm releases may get stuck in the PENDING_UPGRADE status during a management or managed cluster upgrade. The HelmBundle Controller cannot recover from this state and requires manual actions. The workaround below describes the recovery process for the openstack-operator release that got stuck during a managed cluster update. Use it as an example for other Helm releases as required.

Workaround:

  1. Log in to the helm-controller pod console:

    kubectl exec -n kube-system -it helm-controller-0 sh -c tiller
    
  2. Identify the release that is stuck in the PENDING_UPGRADE status. For example:

    ./helm --host=localhost:44134 history openstack-operator
    

    Example of system response:

    REVISION  UPDATED                   STATUS           CHART                      DESCRIPTION
    1         Tue Dec 15 12:30:41 2020  SUPERSEDED       openstack-operator-0.3.9   Install complete
    2         Tue Dec 15 12:32:05 2020  SUPERSEDED       openstack-operator-0.3.9   Upgrade complete
    3         Tue Dec 15 16:24:47 2020  PENDING_UPGRADE  openstack-operator-0.3.18  Preparing upgrade
    
  3. Roll back the failed release to the previous revision:

    1. Download the Helm v3 binary. For details, see official Helm documentation.

    2. Roll back the failed release:

      helm rollback <failed-release-name> <revision>
      

      For example:

      helm rollback openstack-operator 2
      

    Once done, the release will be reconciled.


[18076] StackLight update failure

Fixed in 2.13.0

On a managed cluster with logging disabled, changing NodeSelector can cause StackLight update failure with the following message in the StackLight Helm Controller logs:

Upgrade "stacklight" failed: Job.batch "stacklight-delete-logging-pvcs-*" is invalid: spec.template: Invalid value: ...

As a workaround, disable the stacklight-delete-logging-pvcs-* job.

Workaround:

  1. Open the affected Cluster object for editing:

    kubectl edit cluster <affectedManagedClusterName> -n <affectedManagedClusterProjectName>
    
  2. Set deleteVolumes to false:

    spec:
      ...
      providerSpec:
        ...
        value:
          ...
          helmReleases:
            ...
            - name: stacklight
              values:
                ...
                logging:
                  deleteVolumes: false
                ...
    


Container Cloud web UI
[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs due to the token missing the necessary role for the new project. As a workaround, relogin to the Container Cloud web UI.


Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.11.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.24.6

aws-credentials-controller

1.24.6

Azure New

azure-provider

1.24.6

azure-credentials-controller

1.24.6

Bare metal

baremetal-operator Updated

5.1.0

baremetal-public-api Updated

5.1.0

baremetal-provider Updated

1.24.6

httpd

1.18.0

ironic Updated

victoria-bionic-20210719060025

ironic-operator Updated

base-bionic-20210726193746

kaas-ipam Updated

base-bionic-20210729185610

local-volume-provisioner

1.0.6-mcp

mariadb

10.4.17-bionic-20210617085111

IAM

iam

2.4.2

iam-controller Updated

1.24.6

keycloak

12.0.0

Container Cloud

admission-controller Updated

1.24.8

byo-credentials-controller Updated

1.24.6

byo-provider Updated

1.24.6

kaas-public-api Updated

1.24.6

kaas-exporter Updated

1.24.6

kaas-ui Updated

1.24.7

lcm-controller Updated

0.2.0-404-g7f77e62c

mcc-cache Updated

1.24.6

proxy-controller Updated

1.24.6

release-controller Updated

1.24.6

rhellicense-controller Updated

1.24.6

squid-proxy

0.0.1-5

Equinix Metal Updated

equinix-provider

1.24.6

equinix-credentials-controller

1.24.6

OpenStack Updated

openstack-provider

1.24.6

os-credentials-controller

1.24.6

VMware vSphere Updated

vsphere-provider

1.24.6

vsphere-credentials-controller

1.24.6

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.11.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-5.1.0.tgz

baremetal-public-api Updated

https://binary.mirantis.com/bm/helm/baremetal-public-api-5.1.0.tgz

ironic-python-agent-bionic.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-victoria-bionic-debug-20210622161844

ironic-python-agent-bionic.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-victoria-bionic-debug-20210622161844

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-5.1.0.tgz

local-volume-provisioner

https://binary.mirantis.com/bm/helm/local-volume-provisioner-1.0.6-mcp.tgz

provisioning_ansible Updated

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-74-8ab0bf0.tgz

target ubuntu system

https://binary.mirantis.com/bm/bin/efi/ubuntu/tgz-bionic-20210622161844

Docker images

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-bionic-20210623143347

dnsmasq

mirantis.azurecr.io/general/dnsmasq:focal-20210617094827

httpd

mirantis.azurecr.io/lcm/nginx:1.18.0

ironic Updated

mirantis.azurecr.io/openstack/ironic:victoria-bionic-20210719060025

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:victoria-bionic-20210719060025

ironic-operator Updated

mirantis.azurecr.io/bm/ironic-operator:base-bionic-20210726193746

ironic-prometheus-exporter

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20210608113804

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-bionic-20210729185610

mariadb

mirantis.azurecr.io/general/mariadb:10.4.17-bionic-20210617085111

syslog-ng

mirantis.azurecr.io/bm/syslog-ng:base-bionic-20210617094817


Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.24.6.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.24.6.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.24.6.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.24.6.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.24.6.tgz

azure-credentials-controller New

https://binary.mirantis.com/core/helm/azure-credentials-controller-1.24.6.tgz

azure-provider New

https://binary.mirantis.com/core/helm/azure-provider-1.24.6.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.24.6.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.24.6.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.24.6.tgz

equinix-credentials-controller

https://binary.mirantis.com/core/helm/equinix-credentials-controller-1.24.6.tgz

equinix-provider

https://binary.mirantis.com/core/helm/equinix-provider-1.24.6.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.24.6.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.24.6.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.24.6.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.24.7.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.24.6.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.24.6.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.24.6.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.24.6.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.24.6.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.24.6.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.24.6.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.24.6.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.24.6.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.24.6.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.24.8

aws-cluster-api-controller Updated

mirantis.azurecr.io/core/aws-cluster-api-controller:1.24.6

aws-credentials-controller Updated

mirantis.azurecr.io/core/aws-credentials-controller:1.24.6

azure-cluster-api-controller New

mirantis.azurecr.io/core/azure-cluster-api-controller:1.24.6

azure-credentials-controller New

mirantis.azurecr.io/core/azure-credentials-controller:1.24.6

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.24.6

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.24.6

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.24.6

cluster-api-provider-equinix Updated

mirantis.azurecr.io/core/cluster-api-provider-equinix:1.24.6

equinix-credentials-controller Updated

mirantis.azurecr.io/core/equinix-credentials-controller:1.24.6

frontend Updated

mirantis.azurecr.io/core/frontend:1.24.7

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.24.6

kproxy Updated

mirantis.azurecr.io/lcm/kproxy:1.24.6

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:v0.2.0-404-g7f77e62c

nginx

mirantis.azurecr.io/lcm/nginx:1.18.0

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.24.6

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.24.6

registry

mirantis.azurecr.io/lcm/registry:2.7.1

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.24.6

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.24.6

squid-proxy Updated

mirantis.azurecr.io/core/squid-proxy:0.0.1-5

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-api-controller:1.24.6

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.24.6


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

iamctl-linux Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.3-linux

iamctl-darwin Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.3-darwin

iamctl-windows Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.3-windows

Helm charts

iam Updated

http://binary.mirantis.com/iam/helm/iam-2.4.2.tgz

iam-proxy Updated

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.6.tgz

keycloak_proxy Updated

http://binary.mirantis.com/core/helm/keycloak_proxy-1.25.0.tgz

Docker images

api

mirantis.azurecr.io/iam/api:0.5.2

auxiliary

mirantis.azurecr.io/iam/auxiliary:0.5.2

kubernetes-entrypoint Updated

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v0.3.1

mariadb

mirantis.azurecr.io/general/mariadb:10.4.16-bionic-20201105025052

keycloak Updated

mirantis.azurecr.io/iam/keycloak:0.5.2

keycloak-gatekeeper Updated

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3

Upgrade managed clusters with StackLight deployed in HA mode

Starting from Container Cloud 2.11.0, the StackLight node label is required for managed clusters deployed in HA mode. The StackLight node label allows running StackLight components on specific worker nodes with corresponding resources.

Before upgrading an existing managed cluster with StackLight deployed in HA mode to the latest Cluster release, add the StackLight node label to at least 3 worker machines. Otherwise, the cluster upgrade will fail.

To add the StackLight node label to a worker machine:

  1. Log in to the Container Cloud web UI.

  2. On the Machines page, click the More action icon in the last column of the required machine and select Configure machine.

  3. In the window that opens, select the StackLight node label.

Caution

If your managed cluster contains more than 3 worker nodes, select from the following options:

  • If you have a small cluster, add the StackLight label to all worker nodes.

  • If you have a large cluster, identify the exact nodes that run StackLight and add the label to these specific nodes only.

Otherwise, some of the StackLight components may become inaccessible after the cluster update.

To identify the worker machines where StackLight is deployed:

  1. Log in to the Container Cloud web UI.

  2. Download the required cluster kubeconfig:

    1. On the Clusters page, click the More action icon in the last column of the required cluster and select Download Kubeconfig.

    2. Not recommended. Select Offline Token to generate an offline IAM token. Otherwise, for security reasons, the kubeconfig token expires every 30 minutes of the Container Cloud API idle time and you have to download kubeconfig again with a newly generated token.

    3. Click Download.

  3. Export the kubeconfig parameters to your local machine with access to kubectl. For example:

    export KUBECONFIG=~/Downloads/kubeconfig-test-cluster.yml
    
  4. Obtain the list of machines with the StackLight local volumes attached.

    Note

    In the command below, substitute <mgmtKubeconfig> with the path to your management cluster kubeconfig and <projectName> with the project name where your cluster is located.

    kubectl get persistentvolumes -o=json | \
    jq '.items[]|select(.spec.claimRef.namespace=="stacklight")|.spec.nodeAffinity.required.nodeSelectorTerms[].matchExpressions[].values[]| sub("^kaas-node-"; "")' | \
    sort -u | xargs -I {} kubectl --kubeconfig <mgmtKubeconfig> -n <projectName> get machines -o=jsonpath='{.items[?(@.metadata.annotations.kaas\.mirantis\.com/uid=="{}")].metadata.name}{"\n"}'
    
  5. In the Container Cloud web UI, add the StackLight node label to every machine from the list obtained in the previous step.
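
The web UI is the recommended way to add the label. For reference only, the following snippet is a minimal sketch of how such a label is typically represented in the Machine object through the nodeLabels field; the exact field path is an assumption, so verify it against the Operations Guide before editing Machine objects directly:

spec:
  ...
  providerSpec:
    ...
    value:
      ...
      nodeLabels:
      - key: stacklight
        value: enabled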

2.10.0

The Mirantis Container Cloud GA release 2.10.0:

  • Introduces support for the Cluster release 7.0.0 that is based on the updated versions of Mirantis Container Runtime 20.10.5, and Mirantis Kubernetes Engine 3.4.0 with Kubernetes 1.20.

  • Introduces support for the Cluster release 5.17.0 that is based on Mirantis Kubernetes Engine 3.3.6 with Kubernetes 1.18 and the updated version of Mirantis Container Runtime 20.10.5.

  • Continues supporting the Cluster release 6.16.0 that is based on the Cluster release 5.16.0 and represents Mirantis OpenStack for Kubernetes (MOS) 21.3.

  • Supports deprecated Cluster releases 5.16.0 and 6.14.0 that will become unsupported in one of the following Container Cloud releases.

  • Supports the Cluster release 5.11.0 only for attachment of existing MKE 3.3.4 clusters. For the deployment of new or attachment of existing MKE 3.3.6 clusters, the latest available Cluster release is used.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.10.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.10.0. For the list of enhancements in the Cluster releases 7.0.0, 5.17.0, and 6.16.0 that are supported by the Container Cloud release 2.10.0, see the Cluster releases (managed).


7.x Cluster release series with updated versions of MCR, MKE, and Kubernetes

Implemented the 7.x Cluster release series that contains updated versions of:

  • Mirantis Container Runtime (MCR) 20.10.5

  • Mirantis Kubernetes Engine (MKE) 3.4.0

  • Kubernetes 1.20.1

Support of MKE 3.3.x series and 3.4.0 for cluster attachment

Added support for several Mirantis Kubernetes Engine (MKE) versions of the 3.3.x series and 3.4.0, allowing you to attach or detach existing MKE 3.3.3 - 3.3.6 and 3.4.0 clusters as well as update them to the latest supported version.

This feature allows for visualization of the details of all your MKE clusters on one management cluster, including cluster health, capacity, and usage.

Initial CentOS support for the VMware vSphere provider

Technology Preview

Introduced the initial Technology Preview support of the CentOS 7.9 operating system for the vSphere-based management, regional, and managed clusters.

Note

  • Deployment of a Container Cloud cluster that is based on both RHEL and CentOS operating systems is not supported.

  • To deploy a vSphere-based managed cluster on CentOS with custom or additional mirrors configured in the VM template, the squid-proxy configuration on the management or regional cluster is required. It is done automatically if you use the Container Cloud script for the VM template creation.

RHEL 7.9 support for the VMware vSphere provider

Added support of RHEL 7.9 for the vSphere provider. This operating system is now installed by default on any type of the vSphere-based Container Cloud clusters.

RHEL 7.8 deployment is still possible if access to the rhel-7-server-rpms repository provided by Red Hat Enterprise Linux Server 7 x86_64 is allowed. Verify that your RHEL license or activation key meets this requirement.

Guided tour in the Container Cloud web UI

Implemented the guided tour in the Container Cloud web UI to help you get oriented with the multi-cluster multi-cloud Container Cloud platform. This brief guided tour will step you through the key features of Container Cloud that can be performed using the Container Cloud web UI.

Removal of IAM and Keycloak IPs configuration for the vSphere provider

Removed the following Keycloak and IAM services variables that were used during a vSphere-based management cluster bootstrap for the MetalLB configuration:

  • KEYCLOAK_FLOATING_IP

  • IAM_FLOATING_IP

Now, these IPs are automatically generated within the MetalLB range for certificate creation.

Learn more

Deprecation notes

Command for creation of Keycloak users

Implemented the container-cloud bootstrap user add command that allows creating Keycloak users with specific permissions to access the Container Cloud web UI and manage the Container Cloud clusters.

For security reasons, removed the default password password for Keycloak that was generated during a management cluster bootstrap to access the Container Cloud web UI.
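
For example, a hypothetical invocation; the flag names below are illustrative assumptions rather than the verified syntax, so refer to the Deployment Guide for the exact command reference:

container-cloud bootstrap user add --username <userName> --roles <roleName> --kubeconfig <pathToKubeconfig>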

Documentation enhancements for IAM

On top of continuous improvements delivered to the existing Container Cloud guides, added documentation about the Container Cloud user roles management through the Keycloak Admin Console. The section outlines the IAM roles and scopes structure in Container Cloud as well as role assignment to users using the Keycloak Admin Console.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.10.0 along with the Cluster releases 7.0.0 and 5.17.0.

For more issues addressed for the Cluster release 6.16.0, see also addressed issues 2.8.0 and 2.9.0.

  • [8013][AWS] Fixed the issue with the deployment of managed clusters that require persistent volumes (PVs) failing with pods being stuck in the Pending state with the pod has unbound immediate PersistentVolumeClaims and node(s) had volume node affinity conflict errors.

    Note

    The issue affects only the MKE deployments with Kubernetes 1.18 and is fixed for MKE 3.4.x with Kubernetes 1.20 that is available since the Cluster release 7.0.0.

  • [14981] [Equinix Metal] Fixed the issue with a manager machine deployment failing if the cluster contained at least one manager machine that was stuck in the Provisioning state due to the capacity limits in the selected Equinix Metal data center.

  • [13402] [LCM] Fixed the issue with the existing clusters failing with the no space left on device error due to an excessive amount of core dumps produced by applications that fail frequently.

  • [14125] [LCM] Fixed the issue with managed clusters deployed or updated on a regional cluster of another provider type displaying inaccurate Nodes readiness live status in the Container Cloud web UI.

  • [14040][StackLight] Fixed the issue with the Tiller container of the stacklight-helm-controller pods switching to CrashLoopBackOff and then being OOMKilled. Limited the releases number in history to 3 to prevent RAM overconsumption by Tiller.

  • [14152] [Upgrade] Fixed the issue with managed cluster release upgrade failing and the DNS names of the Kubernetes services on the affected pod not being resolved due to DNS issues on pods with host networking.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.10.0 including the Cluster releases 7.0.0, 6.16.0, and 5.16.0.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


AWS
[8013] Managed cluster deployment requiring PVs may fail

Fixed in the Cluster release 7.0.0

Note

The issue below affects only the Kubernetes 1.18 deployments. Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

On a management cluster with multiple AWS-based managed clusters, some clusters fail to complete the deployments that require persistent volumes (PVs), for example, Elasticsearch. Some of the affected pods get stuck in the Pending state with the pod has unbound immediate PersistentVolumeClaims and node(s) had volume node affinity conflict errors.

Warning

The workaround below applies to HA deployments where data can be rebuilt from replicas. If you have a non-HA deployment, back up any existing data before proceeding, since all data will be lost while applying the workaround.

Workaround:

  1. Obtain the persistent volume claims related to the storage mounts of the affected pods:

    kubectl get pod/<pod_name1> pod/<pod_name2> \
    -o jsonpath='{.spec.volumes[?(@.persistentVolumeClaim)].persistentVolumeClaim.claimName}'
    

    Note

    In the command above and in the subsequent steps, substitute the parameters enclosed in angle brackets with the corresponding values.

  2. Delete the affected Pods and PersistentVolumeClaims to reschedule them. For example, for StackLight:

    kubectl -n stacklight delete \
      pod/<pod_name1> pod/<pod_name2> ... \
      pvc/<pvc_name1> pvc/<pvc_name2> ...
    


Equinix Metal
[16718] Equinix Metal provider fails to create machines with SSH keys error

Fixed in 2.12.0

If an Equinix Metal based cluster is being deployed in an Equinix Metal project with no SSH keys, the Equinix Metal provider fails to create machines with the following error:

Failed to create machine "kaas-mgmt-controlplane-0"...
failed to create device: POST https://api.equinix.com/metal/v1/projects/...
<deviceID> must have at least one SSH key or explicitly send no_ssh_keys option

Workaround:

  1. Create a new SSH key.

  2. Log in to the Equinix Metal console.

  3. In Project Settings, click Project SSH Keys.

  4. Click Add New Key and add details of the newly created SSH key.

  5. Click Add.

  6. Restart the cluster deployment.


Bare metal
[17118] Failure to add a new machine to cluster

Fixed in 2.12.0

Adding a new machine to a baremetal-based managed cluster may fail after the baremetal-based management cluster upgrade. The issue occurs because the PXE boot is not working for the new node. In this case, file /volume/tftpboot/ipxe.efi not found logs appear on dnsmasq-tftp.

Workaround:

  1. Log in to a local machine where your management cluster kubeconfig is located and where kubectl is installed.

  2. Scale the Ironic deployment down to 0 replicas:

    kubectl -n kaas scale deployments/ironic --replicas=0
    
  3. Scale the Ironic deployment up to 1 replica:

    kubectl -n kaas scale deployments/ironic --replicas=1
    

[7655] Wrong status for an incorrectly configured L2 template

Fixed in 2.11.0

If an L2 template is configured incorrectly, a bare metal cluster is deployed successfully but with the runtime errors in the IpamHost object.

Workaround:

If you suspect that the machine is not working properly because of incorrect network configuration, verify the status of the corresponding IpamHost object. Inspect the l2RenderResult and ipAllocationResult object fields for error messages.
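
For example, a minimal sketch that assumes kubectl access to the management cluster and that the IpamHost object resides in the project namespace of the affected cluster:

kubectl -n <projectName> get ipamhosts <ipamHostName> -o yaml

Then inspect the l2RenderResult and ipAllocationResult fields in the output.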



OpenStack
[10424] Regional cluster cleanup fails by timeout

An OpenStack-based regional cluster cleanup fails with the timeout error.

Workaround:

  1. Wait for the Cluster object to be deleted in the bootstrap cluster:

    kubectl --kubeconfig <(./bin/kind get kubeconfig --name clusterapi) get cluster
    

    The system output must be empty.

  2. Remove the bootstrap cluster manually:

    ./bin/kind delete cluster --name clusterapi
    


vSphere
[15698] VIP is assigned to each manager node instead of a single node

Fixed in 2.11.0

A load balancer virtual IP address (VIP) is assigned to each manager node on any type of the vSphere-based cluster. The issue occurs because the Keepalived instances cannot set up a cluster due to the blocked vrrp protocol traffic in the firewall configuration on the Container Cloud nodes.

Note

Before applying the workaround below, verify that the dedicated vSphere network does not have any other virtual machines with the keepalived instance running with the same vrouter_id.

You can verify the vrouter_id value of the cluster in /etc/keepalived/keepalived.conf on the manager nodes.
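
For example, on a manager node (a minimal sketch; in the standard keepalived configuration syntax, the parameter appears as virtual_router_id):

grep -i "router_id" /etc/keepalived/keepalived.conf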

Workaround

Update the firewalld configuration on each manager node of the affected cluster to allow the vrrp protocol traffic between the nodes:

  1. SSH to any manager node using mcc-user.

  2. Apply the firewalld configuration:

    firewall-cmd --add-rich-rule='rule protocol value="vrrp" accept' --permanent
    firewall-cmd --reload
    
  3. Apply the procedure to the remaining manager nodes of the cluster.


[14458] Failure to create a container for pod: cannot allocate memory

Fixed in 2.9.0 for new clusters

Newly created pods may fail to run and have the CrashLoopBackOff status on long-living Container Cloud clusters deployed on RHEL 7.8 using the VMware vSphere provider. The following is an example output of the kubectl describe pod <pod-name> -n <projectName> command:

State:        Waiting
Reason:       CrashLoopBackOff
Last State:   Terminated
Reason:       ContainerCannotRun
Message:      OCI runtime create failed: container_linux.go:349:
              starting container process caused "process_linux.go:297:
              applying cgroup configuration for process caused
              "mkdir /sys/fs/cgroup/memory/kubepods/burstable/<pod-id>/<container-id>>:
              cannot allocate memory": unknown

The issue occurs due to the Kubernetes and Docker community issues.

According to the RedHat solution, the workaround is to disable the kernel memory accounting feature by appending cgroup.memory=nokmem to the kernel command line.

Note

The workaround below applies to the existing clusters only. The issue is resolved for new Container Cloud 2.9.0 deployments since the workaround below automatically applies to the VM template built during the vSphere-based management cluster bootstrap.

Apply the following workaround on each machine of the affected cluster.

Workaround

  1. SSH to any machine of the affected cluster using mcc-user and the SSH key provided during the cluster creation to proceed as the root user.

  2. In /etc/default/grub, set cgroup.memory=nokmem for GRUB_CMDLINE_LINUX.
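
    For example, a minimal sketch where <existing parameters> is a placeholder for the kernel parameters already configured on your system:

    GRUB_CMDLINE_LINUX="<existing parameters> cgroup.memory=nokmem"
    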

  3. Update kernel:

    yum install kernel kernel-headers kernel-tools kernel-tools-libs kexec-tools
    
  4. Update the grub configuration:

    grub2-mkconfig -o /boot/grub2/grub.cfg
    
  5. Reboot the machine.

  6. Wait for the machine to become available.

  7. Wait for 5 minutes for Docker and Kubernetes services to start.

  8. Verify that the machine is Ready:

    docker node ls
    kubectl get nodes
    
  9. Repeat the steps above on the remaining machines of the affected cluster.


[14080] Node leaves the cluster after IP address change

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

A vSphere-based management cluster bootstrap fails due to a node leaving the cluster after an accidental IP address change.

The issue may affect a vSphere-based cluster only when IPAM is not enabled and IP addresses assignment to the vSphere virtual machines is done by a DHCP server present in the vSphere network.

By default, a DHCP server keeps lease of the IP address for 30 minutes. Usually, a VM dhclient prolongs such lease by frequent DHCP requests to the server before the lease period ends. The DHCP prolongation request period is always less than the default lease time on the DHCP server, so prolongation usually works. But in case of network issues, for example, when dhclient from the VM cannot reach the DHCP server, or the VM is being slowly powered on for more than the lease time, such VM may lose its assigned IP address. As a result, it obtains a new IP address.

Container Cloud does not support network reconfiguration after the IP of the VM has been changed. Therefore, such issue may lead to a VM leaving the cluster.

Symptoms:

  • One of the nodes is in the NodeNotReady or down state:

    kubectl get nodes -o wide
    docker node ls
    
  • The UCP Swarm manager logs on the healthy manager node contain the following example error:

    docker logs -f ucp-swarm-manager
    
    level=debug msg="Engine refresh failed" id="<docker node ID>|<node IP>: 12376"
    
  • If the affected node is manager:

    • The output of the docker info command contains the following example error:

      Error: rpc error: code = Unknown desc = The swarm does not have a leader. \
      It's possible that too few managers are online. \
      Make sure more than half of the managers are online.
      
    • The UCP controller logs contain the following example error:

      docker logs -f ucp-controller
      
      "warning","msg":"Node State Active check error: \
      Swarm Mode Manager health check error: \
      info: Cannot connect to the Docker daemon at tcp://<node IP>:12376. \
      Is the docker daemon running?
      
  • On the affected node, the IP address on the first interface eth0 does not match the IP address configured in Docker. Verify the Node Address field in the output of the docker info command.
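
    For example, a quick check (a sketch):

    ip addr show eth0 | grep "inet "
    docker info | grep "Node Address"
    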

  • The following lines are present in /var/log/messages:

    dhclient[<pid>]: bound to <node IP> -- renewal in 1530 seconds
    

    If there are several lines where the IP is different, the node is affected.

Workaround:

Select from the following options:

  • Bind IP addresses for all machines to their MAC addresses on the DHCP server for the dedicated vSphere network. In this case, VMs receive only specified IP addresses that never change.

  • Remove the Container Cloud node IPs from the IP range on the DHCP server for the dedicated vSphere network and configure the first interface eth0 on VMs with a static IP address.

  • If a managed cluster is affected, redeploy it with IPAM enabled for new machines to be created and IPs to be assigned properly.


LCM
[16146] Stuck kubelet on the Cluster release 5.x.x series

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

Occasionally, kubelet may get stuck on the Cluster release 5.x.x series with different errors in the ucp-kubelet containers, leading to node failures. The following error occurs every time you access the Kubernetes API server:

an error on the server ("") has prevented the request from succeeding

As a workaround, restart ucp-kubelet on the failed node:

ctr -n com.docker.ucp snapshot rm ucp-kubelet
docker rm -f ucp-kubelet

[8367] Adding a new manager node to a managed cluster hangs on the Deploy stage

Fixed in 2.12.0

Adding a new manager node to a managed cluster may hang due to issues with joining etcd from the new node to the existing etcd cluster. The new manager node hangs in the Deploy stage.

Symptoms:

  • The Ansible run tries executing the Wait for Docker UCP to be accessible step and fails with the following error message:

    Status code was -1 and not [200]: Request failed: <urlopen error [Errno 111] Connection refused>
    
  • The etcd logs on the leader etcd node contain the following example error message occurring every 1-2 minutes:

    2021-06-10 03:21:53.196677 W | etcdserver: not healthy for reconfigure,
    rejecting member add {ID:22bb1d4275f1c5b0 RaftAttributes:{PeerURLs:[https://<new manager IP>:12380]
    IsLearner:false} Attributes:{Name: ClientURLs:[]}}
    
    • To determine the etcd leader, run on any manager node:

      docker exec -it ucp-kv sh
      # From the inside of the container:
      ETCDCTL_API=3 etcdctl -w table --endpoints=https://<1st manager IP>:12379,https://<2nd manager IP>:12379,https://<3rd manager IP>:12379 endpoint status
      
    • To verify logs on the leader node:

      docker logs ucp-kv
      

Root cause:

In case of an unfortunate network partition, the leader may lose quorum and the members cannot elect a new leader. For more details, see Official etcd documentation: Learning, figure 5.

Workaround:

  1. Restart etcd on the leader node:

    docker rm -f ucp-kv
    
  2. Wait several minutes until the etcd cluster starts and reconciles.

    The deployment of the new manager node will proceed and it will join the etcd cluster. After that, other MKE components will be configured and the node deployment will be finished successfully.


[13303] Managed cluster update fails with the Network is unreachable error

Fixed in 2.11.0

A managed cluster update from the Cluster release 6.12.0 to 6.14.0 fails with worker nodes being stuck in the Deploy state with the Network is unreachable error.

Workaround:

  1. Verify the state of the loopback network interface:

    ip l show lo
    

    If the interface is not in the UNKNOWN or UP state, enable it manually:

    ip l set lo up
    

    If the interface is in the UNKNOWN or UP state, assess the cluster logs to identify the failure root cause.

  2. Repeat the cluster update procedure.


[13845] Cluster update fails during the LCM Agent upgrade with x509 error

Fixed in 2.11.0

During update of a managed cluster from the Cluster releases 6.12.0 to 6.14.0, the LCM Agent upgrade fails with the following error in logs:

lcmAgentUpgradeStatus:
    error: 'failed to download agent binary: Get https://<mcc-cache-address>/bin/lcm/bin/lcm-agent/v0.2.0-289-gd7e9fa9c/lcm-agent:
      x509: certificate signed by unknown authority'

Only clusters initially deployed using Container Cloud 2.4.0 or earlier are affected.

As a workaround, restart lcm-agent using the service lcm-agent-* restart command on the affected nodes.
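
For example, a minimal sketch, assuming the LCM Agent service name includes a version suffix on the affected nodes:

systemctl list-units --all | grep lcm-agent    # identify the exact service name
service lcm-agent-<version> restart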


[6066] Helm releases get stuck in FAILED or UNKNOWN state

Note

The issue affects only Helm v2 releases and is addressed for Helm v3. Starting from Container Cloud 2.19.0, all Helm releases are switched to v3.

During a management, regional, or managed cluster deployment, Helm releases may get stuck in the FAILED or UNKNOWN state although the corresponding machines statuses are Ready in the Container Cloud web UI. For example, if the StackLight Helm release fails, the links to its endpoints are grayed out in the web UI. In the cluster status, providerStatus.helm.ready and providerStatus.helm.releaseStatuses.<releaseName>.success are false.

HelmBundle cannot recover from such states and requires manual actions. The workaround below describes the recovery steps for the stacklight release that got stuck during a cluster deployment. Use this procedure as an example for other Helm releases as required.

Workaround:

  1. Verify the failed release has the UNKNOWN or FAILED status in the HelmBundle object:

    kubectl --kubeconfig <regionalClusterKubeconfigPath> get helmbundle <clusterName> -n <clusterProjectName> -o=jsonpath={.status.releaseStatuses.stacklight}
    
    In the command above and in the steps below, replace the parameters
    enclosed in angle brackets with the corresponding values of your cluster.
    

    Example of system response:

    stacklight:
    attempt: 2
    chart: ""
    finishedAt: "2021-02-05T09:41:05Z"
    hash: e314df5061bd238ac5f060effdb55e5b47948a99460c02c2211ba7cb9aadd623
    message: '[{"occurrence":1,"lastOccurrenceDate":"2021-02-05 09:41:05","content":"error
      updating the release: rpc error: code = Unknown desc = customresourcedefinitions.apiextensions.k8s.io
      \"helmbundles.lcm.mirantis.com\" already exists"}]'
    notes: ""
    status: UNKNOWN
    success: false
    version: 0.1.2-mcp-398
    
  2. Log in to the helm-controller pod console:

    kubectl --kubeconfig <affectedClusterKubeconfigPath> exec -n kube-system -it helm-controller-0 sh -c tiller
    
  3. Download the Helm v3 binary. For details, see official Helm documentation.

  4. Remove the failed release:

    helm delete <failed-release-name>
    

    For example:

    helm delete stacklight
    

    Once done, the release triggers for redeployment.



IAM
[13385] MariaDB pods fail to start after SST sync

Fixed in 2.12.0

The MariaDB pods fail to start after MariaDB blocks itself during the State Snapshot Transfers sync.

Workaround:

  1. Verify the failed pod readiness:

    kubectl describe pod -n kaas <failedMariadbPodName>
    

    If the readiness probe failed with the WSREP not synced message, proceed to the next step. Otherwise, assess the MariaDB pod logs to identify the failure root cause.

  2. Obtain the MariaDB admin password:

    kubectl get secret -n kaas mariadb-dbadmin-password -o jsonpath='{.data.MYSQL_DBADMIN_PASSWORD}' | base64 -d ; echo
    
  3. Verify that wsrep_local_state_comment is Donor or Desynced:

    kubectl exec -it -n kaas <failedMariadbPodName> -- mysql -uroot -p<mariadbAdminPassword> -e "SHOW status LIKE \"wsrep_local_state_comment\";"
    
  4. Restart the failed pod:

    kubectl delete pod -n kaas <failedMariadbPodName>
    


StackLight
[16843] Inability to override default route matchers for Salesforce notifier

Fixed in 2.12.0

It may be impossible to override the default route matchers for Salesforce notifier.

Note

After applying the workaround, you may notice the following warning message. It is expected and does not affect configuration rendering:

Warning: Merging destination map for chart 'stacklight'. Overwriting table
item 'match', with non table value: []

Workaround:

  1. Open the StackLight configuration manifest as described in StackLight configuration procedure.

  2. In alertmanagerSimpleConfig.salesForce, specify the following configuration:

    alertmanagerSimpleConfig:
      salesForce:
        route:
          match: []
          match_re:
            your_matcher_key1: your_matcher_value1
            your_matcher_key2: your_matcher_value2
            ...
    

[17771] Watchdog alert missing in Salesforce route

Fixed in 2.13.0

The Watchdog alert is not routed to Salesforce by default.

Note

After applying the workaround, you may notice the following warning message. It is expected and does not affect configuration rendering:

Warning: Merging destination map for chart 'stacklight'. Overwriting table
item 'match', with non table value: []

Workaround:

  1. Open the StackLight configuration manifest as described in StackLight configuration procedure.

  2. In alertmanagerSimpleConfig.salesForce, specify the following configuration:

    alertmanagerSimpleConfig:
      salesForce:
        route:
          match: []
          match_re:
            severity: "informational|critical"
          matchers:
          - severity=~"informational|critical"
    


Storage
[10050] Ceph OSD pod is in the CrashLoopBackOff state after disk replacement

Fixed in 2.11.0

If you use a custom BareMetalHostProfile, after disk replacement on a Ceph OSD, the Ceph OSD pod switches to the CrashLoopBackOff state due to the Ceph OSD authorization key failing to be created properly.

Workaround:

  1. Export kubeconfig of your managed cluster. For example:

    export KUBECONFIG=~/Downloads/kubeconfig-test-cluster.yml
    
  2. Log in to the ceph-tools pod:

    kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
    
  3. Delete the authorization key for the failed Ceph OSD:

    ceph auth del osd.<ID>
    
  4. SSH to the node on which the Ceph OSD cannot be created.

  5. Clean up the disk that will be a base for the failed Ceph OSD. For details, see official Rook documentation.

    Note

    Ignore failures of the sgdisk --zap-all $DISK and blkdiscard $DISK commands if any.

  6. On the managed cluster, restart Rook Operator:

    kubectl -n rook-ceph delete pod -l app=rook-ceph-operator
    


Bootstrap
[16873] Bootstrap fails with ‘failed to establish connection with tiller’ error

Fixed in 2.12.0

If the latest Ubuntu 18.04 image, for example, with kernel 4.15.0-153-generic, is installed on the bootstrap node, a management cluster bootstrap fails during the setup of the Kubernetes cluster by kind.

The issue occurs since the kind version 0.9.0 delivered with the bootstrap script is not compatible with the latest Ubuntu 18.04 image that requires kind version 0.11.1.

To verify that the bootstrap node is affected by the issue:

  1. In the bootstrap script stdout, verify the connection to Tiller.

    Example of system response extract on an affected bootstrap node:

    clusterdeployer.go:164] Initialize Tiller in bootstrap cluster.
    bootstrap_create.go:64] unable to initialize Tiller in bootstrap cluster: \
    failed to establish connection with tiller
    
  2. In the bootstrap script stdout, identify the step after which the bootstrap process fails.

    Example of system response extract on an affected bootstrap node:

    clusterdeployer.go:128] Connecting to bootstrap cluster
    
  3. In the kind cluster, verify the kube-proxy service readiness:

    ./bin/kind get kubeconfig --name clusterapi > /tmp/kind_kubeconfig.yaml
    
    ./bin/kubectl --kubeconfig /tmp/kind_kubeconfig.yaml get po -n kube-system | grep kube-proxy
    
    ./bin/kubectl --kubeconfig /tmp/kind_kubeconfig.yaml -n kube-system logs kube-proxy-<podPostfixID>
    

    Example of the kube-proxy service stdout extract on an affected bootstrap node:

    I0831 11:56:16.139300  1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
    F0831 11:56:16.139313  1 server.go:497] open /proc/sys/net/netfilter/nf_conntrack_max: permission denied
    

If the verification steps above are positive, proceed with the workaround below.

Workaround:

  1. Clean up the bootstrap cluster:

    ./bin/kind delete cluster --name clusterapi
    
  2. Upgrade the kind binary to version 0.11.1:

    curl -L https://github.com/kubernetes-sigs/kind/releases/download/v0.11.1/kind-linux-amd64 -o bin/kind
    
    chmod a+x bin/kind
    
  3. Restart the bootstrap script:

    ./bootstrap.sh all
    


Upgrade
[16233] Bare metal pods fail during upgrade due to Ceph not unmounting RBD

Fixed in 2.11.0

A baremetal-based management cluster upgrade can fail with stuck ironic and dnsmasq pods. The issue may occur because the Ceph persistent volumes created before the upgrade are unmapped incorrectly. As a result, the RBD volume mounts on the nodes remain without the underlying RBD volumes.

Symptoms:

  1. The ironic and dnsmasq deployments fail:

    kubectl -n kaas get deploy
    

    Example of system response:

    NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
    ironic                            0/1     0            0           6d10h
    dnsmasq                           0/1     0            0           6d10h
    
  2. The bare metal mariadb and httpd statefulSets fail:

    kubectl -n kaas get statefulset
    

    Example output:

    NAME             READY   AGE
    httpd            0/1     6d10h
    mariadb          0/1     6d10h
    
  3. On the pods of the failed deployments, the ll /volume command hangs or outputs the Input/output error:

    1. Enter any pod of the failed deployment:

      kubectl -n kaas exec -it <podName> -- bash
      

      Replace <podName> with the affected pod name. For example, httpd-0.

    2. Obtain the list of files in the /volume directory:

      ll /volume
      

      Example of system response:

      ls: reading directory '.': Input/output error
      

      If the above command gets stuck or outputs the Input/output error, the issue relates to the unmounted ceph-csi RBD devices.

Workaround:

  1. Identify the names of nodes with the affected pods:

    kubectl -n kaas get pod <podName> -o jsonpath='{.spec.nodeName}'
    

    Replace <podName> with the affected pod name.

  2. Identify which csi-rbdplugin pod is assigned to which node:

    kubectl -n rook-ceph get pod -l app=csi-rbdplugin -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.nodeName}{"\n"}'
    
  3. Enter any affected csi-rbdplugin pod:

    kubectl -n rook-ceph exec -it <csiPodName> -c csi-rbdplugin -- bash
    
  4. Identify the mapped device classes on this pod:

    rbd device list
    
  5. Identify which devices are mounted on this pod:

    mount | grep rbd
    
  6. Unmount all devices that are not included into the rbd device list command output:

    umount <rbdDeviceName>
    

    Replace <rbdDeviceName> with a mounted RBD device name that is not included into the rbd device list output. For example, /dev/rbd0.

  7. Exit the csi-rbdplugin pod:

    exit
    
  8. Repeat the steps above for the remaining affected csi-rbdplugin pods on every affected node.

  9. Once all nonexistent mounts are unmounted on all nodes, restart the stuck deployments:

    kubectl -n kaas get deploy
    kubectl -n kaas scale deploy <deploymentName> --replicas 0
    kubectl -n kaas scale deploy <deploymentName> --replicas <replicasNumber>
    
    • <deploymentName> is a stuck bare metal deployment name, for example, ironic

    • <replicasNumber> is the original number of replicas for the deployment that you can obtain using the get deploy command

  10. Restart the failed bare metal statefulSets:

    kubectl -n kaas get statefulset
    kubectl -n kaas scale statefulset <statefulSetName> --replicas 0
    kubectl -n kaas scale statefulset <statefulSetName> --replicas <replicasNumber>
    
    • <statefulSetName> is a failed bare metal statefulSet name, for example, mariadb

    • <replicasNumber> is the original number of replicas for the statefulSet that you can obtain using the get statefulset command


[16379,23865] Cluster update fails with the FailedMount warning

Fixed in 2.19.0

An Equinix-based management or managed cluster fails to update with the FailedAttachVolume and FailedMount warnings.

Workaround:

  1. Verify that the description of the pods that failed to run contain the FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    
    • <affectedProjectName> is the Container Cloud project name where the pods failed to run

    • <affectedPodName> is a pod name that failed to run in this project

    In the pod description, identify the node name where the pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down the affected StatefulSet or Deployment of the pod that fails to init to 0 replicas.

  4. On every csi-rbdplugin pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.

  6. Delete the VolumeAttachment of the affected pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state is Running.


[9899] Helm releases get stuck in PENDING_UPGRADE during cluster update

Fixed in 2.14.0

Helm releases may get stuck in the PENDING_UPGRADE status during a management or managed cluster upgrade. The HelmBundle Controller cannot recover from this state and requires manual actions. The workaround below describes the recovery process for the openstack-operator release that got stuck during a managed cluster update. Use it as an example for other Helm releases as required.

Workaround:

  1. Log in to the helm-controller pod console:

    kubectl exec -n kube-system -it helm-controller-0 sh -c tiller
    
  2. Identify the release that is stuck in the PENDING_UPGRADE status. For example:

    ./helm --host=localhost:44134 history openstack-operator
    

    Example of system response:

    REVISION  UPDATED                   STATUS           CHART                      DESCRIPTION
    1         Tue Dec 15 12:30:41 2020  SUPERSEDED       openstack-operator-0.3.9   Install complete
    2         Tue Dec 15 12:32:05 2020  SUPERSEDED       openstack-operator-0.3.9   Upgrade complete
    3         Tue Dec 15 16:24:47 2020  PENDING_UPGRADE  openstack-operator-0.3.18  Preparing upgrade
    
  3. Roll back the failed release to the previous revision:

    1. Download the Helm v3 binary. For details, see official Helm documentation.

    2. Roll back the failed release:

      helm rollback <failed-release-name> <revision>
      

      For example:

      helm rollback openstack-operator 2
      

    Once done, the release will be reconciled.


[15766] Cluster upgrade failure

Fixed in 2.11.0

Upgrade of a Container Cloud management or regional cluster from version 2.9.0 to 2.10.0, or of a managed cluster from 5.16.0 to 5.17.0, may fail with the following error message for the patroni-12-0, patroni-12-1, or patroni-12-2 pod.

error when evicting pods/"patroni-12-2" -n "stacklight" (will retry after 5s):
Cannot evict pod as it would violate the pod's disruption budget.

As a workaround, reinitialize the Patroni pod that got stuck:

kubectl -n stacklight exec -ti -c patroni $(kubectl -n stacklight \
get ep/patroni-12 -o jsonpath='{.metadata.annotations.leader}') -- \
patronictl reinit patroni-12 <POD_NAME> --force --wait

Substitute <POD_NAME> with the name of the Patroni pod from the error message. For example:

kubectl -n stacklight exec -ti -c patroni $(kubectl -n stacklight \
get ep/patroni-12 -o jsonpath='{.metadata.annotations.leader}') -- \
patronictl reinit patroni-12 patroni-12-2

If the command above fails, reinitialize the affected pod with a new volume by deleting the pod itself and the associated PersistentVolumeClaim (PVC):

  1. Obtain the PVC of the affected pod:

    kubectl -n stacklight get "pod/<POD_NAME>" -o jsonpath='{.spec.volumes[?(@.name=="storage-volume")].persistentVolumeClaim.claimName}'
    
  2. Delete the affected pod and its PVC:

    kubectl -n stacklight delete "pod/<POD_NAME>" "pvc/<POD_PVC>"
    sleep 3  # wait for StatefulSet to reschedule the pod, but miss dependent PVC creation
    kubectl -n stacklight delete "pod/<POD_NAME>"
    
[16141] Alertmanager pod gets stuck in CrashLoopBackOff during upgrade

Fixed in 2.11.0

An Alertmanager pod may get stuck in the CrashLoopBackOff state during upgrade of a management, regional, or managed cluster and thus cause upgrade failure with the Loading configuration file failed error message in logs.

Workaround:

  1. Delete the Alertmanager pod that is stuck in the CrashLoopBackOff state. For example:

    kubectl delete pod/prometheus-alertmanager-1 -n stacklight
    
  2. Wait for several minutes and verify that Alertmanager and its pods are up and running:

    kubectl get all -n stacklight -l app=prometheus,component=alertmanager
    


Container Cloud web UI
[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs due to the token missing the necessary role for the new project. As a workaround, relogin to the Container Cloud web UI.


Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.10.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.23.2

aws-credentials-controller

1.23.2

Bare metal

baremetal-operator Updated

5.0.5

baremetal-public-api Updated

5.0.4

baremetal-provider Updated

1.23.2

httpd

1.18.0

ironic Updated

victoria-bionic-20210615143607

ironic-operator Updated

base-bionic-20210622124940

kaas-ipam Updated

base-bionic-20210617150226

local-volume-provisioner

1.0.6-mcp

mariadb Updated

10.4.17-bionic-20210617085111

IAM

iam Updated

2.4.2

iam-controller Updated

1.23.2

keycloak

12.0.0

Container Cloud Updated

admission-controller

1.23.3

byo-credentials-controller

1.23.2

byo-provider

1.23.2

kaas-public-api

1.23.2

kaas-exporter

1.23.2

kaas-ui

1.23.4

lcm-controller

0.2.0-372-g7e042f4d

mcc-cache

1.23.2

proxy-controller

1.23.2

release-controller

1.23.2

rhellicense-controller

1.23.2

squid-proxy

0.0.1-5

Equinix Metal Updated

equinix-provider

1.23.2

equinix-credentials-controller

1.23.2

OpenStack Updated

openstack-provider

1.23.2

os-credentials-controller

1.23.2

VMware vSphere Updated

vsphere-provider

1.23.2

vsphere-credentials-controller

1.23.2

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.10.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-5.0.5.tgz

baremetal-public-api Updated

https://binary.mirantis.com/bm/helm/baremetal-public-api-5.0.4.tgz

ironic-python-agent-bionic.kernel Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-victoria-bionic-debug-20210622161844

ironic-python-agent-bionic.initramfs Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-victoria-bionic-debug-20210622161844

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-5.0.4.tgz

local-volume-provisioner

https://binary.mirantis.com/bm/helm/local-volume-provisioner-1.0.6-mcp.tgz

provisioning_ansible Updated

https://binary.mirantis.com/bm/bin/ansible/provisioning_ansible-0.1.1-72-3120eae.tgz

target ubuntu system Updated

https://binary.mirantis.com/bm/bin/efi/ubuntu/tgz-bionic-20210622161844

Docker images

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-bionic-20210623143347

dnsmasq Updated

mirantis.azurecr.io/general/dnsmasq:focal-20210617094827

httpd

mirantis.azurecr.io/lcm/nginx:1.18.0

ironic Updated

mirantis.azurecr.io/openstack/ironic:victoria-bionic-20210615143607

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:victoria-bionic-20210615143607

ironic-operator Updated

mirantis.azurecr.io/bm/ironic-operator:base-bionic-20210622124940

ironic-prometheus-exporter Updated

mirantis.azurecr.io/stacklight/ironic-prometheus-exporter:0.1-20210608113804

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-bionic-20210617150226

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.4.17-bionic-20210617085111

syslog-ng Updated

mirantis.azurecr.io/bm/syslog-ng:base-bionic-20210617094817


Core artifacts

Artifact

Component

Paths

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.23.2.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.23.2.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.23.2.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.23.2.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.23.2.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.23.2.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.23.2.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.23.2.tgz

equinix-credentials-controller

https://binary.mirantis.com/core/helm/equinix-credentials-controller-1.23.2.tgz

equinix-provider

https://binary.mirantis.com/core/helm/equinix-provider-1.23.2.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.23.2.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.23.2.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.23.2.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.23.2.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.23.2.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.23.2.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.23.2.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.23.2.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.23.2.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.23.2.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.23.2.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.23.2.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.23.2.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.23.2.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.23.3

aws-cluster-api-controller Updated

mirantis.azurecr.io/core/aws-cluster-api-controller:1.23.2

aws-credentials-controller Updated

mirantis.azurecr.io/core/aws-credentials-controller:1.23.2

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.23.2

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.23.2

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.23.2

cluster-api-provider-equinix Updated

mirantis.azurecr.io/core/cluster-api-provider-equinix:1.23.2

equinix-credentials-controller Updated

mirantis.azurecr.io/core/equinix-credentials-controller:1.23.2

frontend Updated

mirantis.azurecr.io/core/frontend:1.23.4

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.23.2

kproxy Updated

mirantis.azurecr.io/lcm/kproxy:1.23.2

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:v0.2.0-372-g7e042f4d

nginx

mirantis.azurecr.io/lcm/nginx:1.18.0

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.23.2

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.23.2

registry

mirantis.azurecr.io/lcm/registry:2.7.1

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.23.2

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.23.2

squid-proxy Updated

mirantis.azurecr.io/core/squid-proxy:0.0.1-5

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-api-controller:1.23.2

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.23.2


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

iamctl-linux Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.2-linux

iamctl-darwin Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.2-darwin

iamctl-windows Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.2-windows

Helm charts

iam Updated

http://binary.mirantis.com/iam/helm/iam-2.4.2.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

keycloak-proxy Updated

http://binary.mirantis.com/core/helm/keycloak_proxy-1.23.2.tgz

Docker images

api Updated

mirantis.azurecr.io/iam/api:0.5.2

auxiliary Updated

mirantis.azurecr.io/iam/auxiliary:0.5.2

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.0-20200311160233

mariadb

mirantis.azurecr.io/general/mariadb:10.4.16-bionic-20201105025052

keycloak

mirantis.azurecr.io/iam/keycloak:0.4.0

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:6.0.1

2.9.0

The Mirantis Container Cloud GA release 2.9.0:

  • Introduces support for the Cluster release 5.16.0 that is based on Kubernetes 1.18, Mirantis Container Runtime 19.03.14, and Mirantis Kubernetes Engine 3.3.6.

  • Introduces support for the Cluster release 6.16.0 that is based on the Cluster release 5.16.0 and represents Mirantis OpenStack for Kubernetes (MOS) 21.3.

  • Supports deprecated Cluster releases 5.15.0 and 6.14.0 that will become unsupported in one of the following Container Cloud releases.

  • Supports the Cluster release 5.11.0 only for attachment of existing MKE 3.3.4 clusters. For the deployment of new or attachment of existing MKE 3.3.6 clusters, the latest available Cluster release is used.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.9.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.9.0. For the list of enhancements in the Cluster release 5.16.0 and Cluster release 6.16.0 that are supported by the Container Cloud release 2.9.0, see the 5.16.0 and 6.16.0 sections.


Container Cloud clusters based on Equinix Metal

Introduced support for the Equinix Metal cloud provider. Equinix Metal integrates a fully automated bare metal infrastructure at software speed.

Now, you can deploy managed clusters that are based on the Equinix Metal management or regional clusters or on top of the AWS-based management cluster.

Using the Equinix Metal management cluster, you can also deploy additional regional clusters that are based on the OpenStack, AWS, vSphere, or Equinix Metal cloud providers to deploy and operate managed clusters of different provider types or configurations from a single Container Cloud management plane.

The Equinix Metal based managed clusters also include a Ceph cluster that can be configured either automatically or manually before or after the cluster deployment.

Integration of Container Cloud to Lens

Implemented the Container Cloud integration to Lens. Using the Container Cloud web UI and the Lens extension, you can now add any type of Container Cloud clusters to Lens for further inspection and monitoring.

The following options are now available in the More action icon menu of each deployed cluster:

  • Add cluster to Lens

  • Open cluster in Lens

New bootstrap node for additional regional clusters

Added the possibility to use a new bootstrap node for deployment of additional regional clusters. You can now deploy regional clusters not only on the bootstrap node where you originally deployed the related management cluster, but also on a new node.

TLS certificates for management cluster applications

Implemented the possibility to configure TLS certificates for Keycloak and Container Cloud web UI on new management clusters.

Caution

Adding TLS certificates for Keycloak is not supported on existing clusters deployed using a Container Cloud release earlier than 2.9.0.

Default Keycloak authorization in Container Cloud web UI

For security reasons, updated the Keycloak authorization logic. The Keycloak single sign-on (SSO) feature that was optional in previous releases is now the default and the only possible login option for the Container Cloud web UI.

While you are logged in using the Keycloak SSO, you can:

  • Download a cluster kubeconfig without a password

  • Log in to an MKE cluster without having to sign in again

  • Use the StackLight endpoints without having to sign in again

Note

Keycloak is exposed using HTTPS with self-signed TLS certificates that are not trusted by web browsers.

To use your own TLS certificates for Keycloak, refer to Operations Guide: Configure TLS certificates for management cluster applications.

SSH keys management for mcc-user

Implemented management of SSH keys only for the universal mcc-user that is now applicable to any Container Cloud provider and node type, including Bastion. All existing SSH user names, such as ubuntu, cloud-user for the vSphere-based clusters, are replaced with the universal mcc-user user name.
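
For example, to access a cluster node over SSH, assuming the SSH private key provided during the cluster creation:

ssh -i <pathToPrivateSshKey> mcc-user@<nodeIP>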

Learn more

Deprecation notes

VMware vSphere resources controller

Implemented the vsphereResources controller to represent the vSphere resources as Kubernetes objects and manage them using the Container Cloud web UI.

You can now use the drop-down list fields to filter results by a short resource name during cluster and machine creation. The drop-down lists for the following vSphere resource paths are added to the Container Cloud web UI:

  • Machine folder

  • Network

  • Resource pool

  • Datastore for the cluster

  • Datastore for the cloud provider

  • VM template

New format of L2 templates

Updated the L2 templates format for baremetal-based deployments. In the new format, l2template:status:npTemplate is used directly during provisioning. Therefore, a hardware node obtains and applies a complete network configuration during the first system boot.

Before Container Cloud 2.9.0, you could configure any network interface except the default provisioning NIC, which is used for PXE and the LCM managed-to-manager connection. Since Container Cloud 2.9.0, you can configure any interface if required.

Caution

  • Deploy any new node using the L2 template of the new format.

  • Replace all deprecated L2 templates created before Container Cloud 2.9.0 with L2 templates of the new format.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.9.0 along with the Cluster releases 6.16.0 and 5.16.0.

For more issues addressed for the Cluster release 6.16.0, see also 2.8.0 addressed issues.

  • [14682][StackLight] Reduced the number of KubePodNotReady and KubePodCrashLooping alerts. Reworked these alerts and renamed them to KubePodsNotReady and KubePodsCrashLooping.

  • [14663][StackLight] Removed the inefficient Kubernetes API and etcd latency alerts.

  • [14458][vSphere] Fixed the issue with newly created pods failing to run and having the CrashLoopBackOff status on long-living vSphere-based clusters.

    The issue is fixed for new clusters deployed using Container Cloud 2.9.0. For existing clusters, apply the workaround described in vSphere known issues.

  • [14051][Ceph] Fixed the issue with the CephCluster creation failure if manageOsds was enabled before deploy.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.9.0 including the Cluster release 5.16.0 and 6.16.0.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


AWS
[8013] Managed cluster deployment requiring PVs may fail

Fixed in the Cluster release 7.0.0

Note

The issue below affects only the Kubernetes 1.18 deployments. Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

On a management cluster with multiple AWS-based managed clusters, some clusters fail to complete the deployments that require persistent volumes (PVs), for example, Elasticsearch. Some of the affected pods get stuck in the Pending state with the pod has unbound immediate PersistentVolumeClaims and node(s) had volume node affinity conflict errors.

Warning

The workaround below applies to HA deployments where data can be rebuilt from replicas. If you have a non-HA deployment, back up any existing data before proceeding, since all data will be lost while applying the workaround.

Workaround:

  1. Obtain the persistent volume claims related to the storage mounts of the affected pods:

    kubectl get pod/<pod_name1> pod/<pod_name2> \
    -o jsonpath='{.spec.volumes[?(@.persistentVolumeClaim)].persistentVolumeClaim.claimName}'
    

    Note

    In the command above and in the subsequent steps, substitute the parameters enclosed in angle brackets with the corresponding values.

  2. Delete the affected Pods and PersistentVolumeClaims to reschedule them. For example, for StackLight:

    kubectl -n stacklight delete \
      pod/<pod_name1> pod/<pod_name2> ... \
      pvc/<pvc_name1> pvc/<pvc_name2> ...
    


vSphere
[15698] VIP is assigned to each manager node instead of a single node

Fixed in 2.11.0

A load balancer virtual IP address (VIP) is assigned to each manager node on any type of the vSphere-based cluster. The issue occurs because the Keepalived instances cannot set up a cluster due to the blocked vrrp protocol traffic in the firewall configuration on the Container Cloud nodes.

Note

Before applying the workaround below, verify that the dedicated vSphere network does not have any other virtual machines with the keepalived instance running with the same vrouter_id.

You can verify the vrouter_id value of the cluster in /etc/keepalived/keepalived.conf on the manager nodes.
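
For example, assuming the standard keepalived configuration keyword:

grep virtual_router_id /etc/keepalived/keepalived.conf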

Workaround

Update the firewalld configuration on each manager node of the affected cluster to allow the vrrp protocol traffic between the nodes:

  1. SSH to any manager node using mcc-user.

  2. Apply the firewalld configuration:

    firewall-cmd --add-rich-rule='rule protocol value="vrrp" accept' --permanent
    firewall-cmd --reload
    
  3. Apply the procedure to the remaining manager nodes of the cluster.


[14080] Node leaves the cluster after IP address change

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

A vSphere-based management cluster bootstrap fails due to a node leaving the cluster after an accidental IP address change.

The issue may affect a vSphere-based cluster only when IPAM is not enabled and IP addresses assignment to the vSphere virtual machines is done by a DHCP server present in the vSphere network.

By default, a DHCP server keeps lease of the IP address for 30 minutes. Usually, a VM dhclient prolongs such lease by frequent DHCP requests to the server before the lease period ends. The DHCP prolongation request period is always less than the default lease time on the DHCP server, so prolongation usually works. But in case of network issues, for example, when dhclient from the VM cannot reach the DHCP server, or the VM is being slowly powered on for more than the lease time, such VM may lose its assigned IP address. As a result, it obtains a new IP address.

Container Cloud does not support network reconfiguration after the IP of the VM has been changed. Therefore, such issue may lead to a VM leaving the cluster.

Symptoms:

  • One of the nodes is in the NodeNotReady or down state:

    kubectl get nodes -o wide
    docker node ls
    
  • The UCP Swarm manager logs on the healthy manager node contain the following example error:

    docker logs -f ucp-swarm-manager
    
    level=debug msg="Engine refresh failed" id="<docker node ID>|<node IP>: 12376"
    
  • If the affected node is manager:

    • The output of the docker info command contains the following example error:

      Error: rpc error: code = Unknown desc = The swarm does not have a leader. \
      It's possible that too few managers are online. \
      Make sure more than half of the managers are online.
      
    • The UCP controller logs contain the following example error:

      docker logs -f ucp-controller
      
      "warning","msg":"Node State Active check error: \
      Swarm Mode Manager health check error: \
      info: Cannot connect to the Docker daemon at tcp://<node IP>:12376. \
      Is the docker daemon running?
      
  • On the affected node, the IP address on the first interface eth0 does not match the IP address configured in Docker. Verify the Node Address field in the output of the docker info command.

  • The following lines are present in /var/log/messages:

    dhclient[<pid>]: bound to <node IP> -- renewal in 1530 seconds
    

    If there are several lines where the IP is different, the node is affected.

Workaround:

Select from the following options:

  • Bind IP addresses for all machines to their MAC addresses on the DHCP server for the dedicated vSphere network. In this case, VMs receive only specified IP addresses that never change.

  • Remove the Container Cloud node IPs from the IP range on the DHCP server for the dedicated vSphere network and configure the first interface eth0 on VMs with a static IP address.

  • If a managed cluster is affected, redeploy it with IPAM enabled for new machines to be created and IPs to be assigned properly.

[14458] Failure to create a container for pod: cannot allocate memory

Fixed in 2.9.0 for new clusters

Newly created pods may fail to run and have the CrashLoopBackOff status on long-living Container Cloud clusters deployed on RHEL 7.8 using the VMware vSphere provider. The following is an example output of the kubectl describe pod <pod-name> -n <projectName> command:

State:        Waiting
Reason:       CrashLoopBackOff
Last State:   Terminated
Reason:       ContainerCannotRun
Message:      OCI runtime create failed: container_linux.go:349:
              starting container process caused "process_linux.go:297:
              applying cgroup configuration for process caused
              "mkdir /sys/fs/cgroup/memory/kubepods/burstable/<pod-id>/<container-id>>:
              cannot allocate memory": unknown

The issue occurs due to the Kubernetes and Docker community issues.

According to the Red Hat solution, the workaround is to disable the kernel memory accounting feature by appending cgroup.memory=nokmem to the kernel command line.
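
For reference, after applying step 2 of the workaround below, the kernel command line entry may look as follows. This is a minimal sketch; the parameters other than cgroup.memory=nokmem are illustrative and depend on your existing configuration:

# Example /etc/default/grub entry; keep your existing parameters and append cgroup.memory=nokmem.
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet cgroup.memory=nokmem"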

Note

The workaround below applies to the existing clusters only. The issue is resolved for new Container Cloud 2.9.0 deployments since the workaround below automatically applies to the VM template built during the vSphere-based management cluster bootstrap.

Apply the following workaround on each machine of the affected cluster.

Workaround

  1. SSH to any machine of the affected cluster using mcc-user and the SSH key provided during the cluster creation to proceed as the root user.

  2. In /etc/default/grub, set cgroup.memory=nokmem for GRUB_CMDLINE_LINUX.

  3. Update kernel:

    yum install kernel kernel-headers kernel-tools kernel-tools-libs kexec-tools
    
  4. Update the grub configuration:

    grub2-mkconfig -o /boot/grub2/grub.cfg
    
  5. Reboot the machine.

  6. Wait for the machine to become available.

  7. Wait for 5 minutes for Docker and Kubernetes services to start.

  8. Verify that the machine is Ready:

    docker node ls
    kubectl get nodes
    
  9. Repeat the steps above on the remaining machines of the affected cluster.



OpenStack
[10424] Regional cluster cleanup fails by timeout

An OpenStack-based regional cluster cleanup fails with the timeout error.

Workaround:

  1. Wait for the Cluster object to be deleted in the bootstrap cluster:

    kubectl --kubeconfig <(./bin/kind get kubeconfig --name clusterapi) get cluster
    

    The system output must be empty.

  2. Remove the bootstrap cluster manually:

    ./bin/kind delete cluster --name clusterapi
    


Equinix Metal
[14981] Equinix Metal machine is stuck in Deploy stage

Fixed in 2.10.0

An Equinix Metal manager machine deployment may fail if the cluster contains at least one manager machine that is stuck in the Provisioning state due to the capacity limits in the selected Equinix Metal data center. In this case, other machines that were successfully created in Equinix Metal may also fail to finalize the deployment and get stuck on the Deploy stage. To resolve the issue, remove all manager machines that are stuck in the Provisioning state.

Workaround:

  1. Export the kubeconfig of the management cluster. For example:

    export KUBECONFIG=~/Downloads/kubeconfig-test-mgmt.yml
    
  2. Add the kaas.mirantis.com/validate: "false" annotation to all machines that are stuck in the Provisioning state.

    Note

    In the commands below, replace $MACHINE_PROJECT_NAME and $MACHINE_NAME with the cluster project name and name of the affected machine respectively:

    kubectl -n $MACHINE_PROJECT_NAME annotate machine $MACHINE_NAME kaas.mirantis.com/validate="false"
    
  3. Remove the machine that is stuck in the Provisioning state using the Container Cloud web UI or using the following command:

    kubectl -n $MACHINE_PROJECT_NAME delete machine $MACHINE_NAME
    

After all machines that are stuck in the Provisioning state are removed, the deployment of the manager machine that is stuck on the Deploy stage resumes.



Bare metal
[14642] Ironic logs overflow the storage volume

On the baremetal-based management clusters with the Cluster version 2.9.0 or earlier, the storage volume used by Ironic can run out of free space. As a result, an automatic upgrade of the management cluster fails with the no space left on device error in the Ironic logs.

Symptoms:

  • The httpd Deployment and the ironic and dnsmasq StatefulSets are not in the OK status:

    kubectl -n kaas get deployments
    kubectl -n kaas get statefulsets
    
  • One or more of the httpd, ironic, and dnsmasq pods fail to start:

    kubectl get pods -n kaas -o wide | grep httpd-0
    

    If the number of ready containers for the pod is 0/1, the management cluster can be affected by the issue.

    kubectl get pods -n kaas -o wide | grep ironic
    

    If the number of ready containers for the pod is not 6/6, the management cluster can be affected by the issue.

  • Logs of the affected pods contain the no space left on device error:

    kubectl -n kaas logs httpd-0 | grep -i 'no space left on device'
    

As a workaround, truncate the Ironic log files on the storage volume:

kubectl -n kaas exec -ti sts/httpd -- /bin/bash -c 'truncate -s 0 /volume/log/ironic/ironic-api.log'
kubectl -n kaas exec -ti sts/httpd -- /bin/bash -c 'truncate -s 0 /volume/log/ironic/ironic-conductor.log'
kubectl -n kaas exec -ti sts/httpd -- /bin/bash -c 'truncate -s 0 /volume/log/ironic/ansible-playbook.log'
kubectl -n kaas exec -ti sts/httpd -- /bin/bash -c 'truncate -s 0 /volume/log/ironic-inspector/ironic-inspector.log'
kubectl -n kaas exec -ti sts/httpd -- /bin/bash -c 'truncate -s 0 /volume/log/dnsmasq/dnsmasq-dhcpd.log'
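
To verify the available space on the volume after truncating the logs, run the following check. This is a minimal sketch that assumes the logs volume is mounted at /volume, as in the paths above:

kubectl -n kaas exec -ti sts/httpd -- df -h /volume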

[7655] Wrong status for an incorrectly configured L2 template

Fixed in 2.11.0

If an L2 template is configured incorrectly, a bare metal cluster is deployed successfully but with the runtime errors in the IpamHost object.

Workaround:

If you suspect that the machine is not working properly because of incorrect network configuration, verify the status of the corresponding IpamHost object. Inspect the l2RenderResult and ipAllocationResult object fields for error messages.
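
A minimal check, assuming that the IpamHost object name matches the affected machine name in the cluster project namespace; inspect the status section of the output for the fields mentioned above:

kubectl -n <clusterProjectName> get ipamhost <ipamHostName> -o yaml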



Storage
[10050] Ceph OSD pod is in the CrashLoopBackOff state after disk replacement

Fixed in 2.11.0

If you use a custom BareMetalHostProfile, after disk replacement on a Ceph OSD, the Ceph OSD pod switches to the CrashLoopBackOff state due to the Ceph OSD authorization key failing to be created properly.

Workaround:

  1. Export kubeconfig of your managed cluster. For example:

    export KUBECONFIG=~/Downloads/kubeconfig-test-cluster.yml
    
  2. Log in to the ceph-tools pod:

    kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
    
  3. Delete the authorization key for the failed Ceph OSD:

    ceph auth del osd.<ID>
    
  4. SSH to the node on which the Ceph OSD cannot be created.

  5. Clean up the disk that will be a base for the failed Ceph OSD. For details, see the official Rook documentation. A minimal cleanup sketch is provided after this procedure.

    Note

    Ignore failures of the sgdisk --zap-all $DISK and blkdiscard $DISK commands if any.

  6. On the managed cluster, restart Rook Operator:

    kubectl -n rook-ceph delete pod -l app=rook-ceph-operator
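
A minimal cleanup sketch for step 5 above, based on the commands mentioned in the corresponding note; the device name is hypothetical and must be verified before running:

# Hypothetical device name of the disk that will back the new Ceph OSD; verify it first.
DISK="/dev/sdX"
# Wipe the partition table and discard the device blocks; ignore failures of these commands if any.
sgdisk --zap-all $DISK
blkdiscard $DISK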
    


IAM
[13385] MariaDB pods fail to start after SST sync

Fixed in 2.12.0

The MariaDB pods fail to start after MariaDB blocks itself during the State Snapshot Transfers sync.

Workaround:

  1. Verify the failed pod readiness:

    kubectl describe pod -n kaas <failedMariadbPodName>
    

    If the readiness probe failed with the WSREP not synced message, proceed to the next step. Otherwise, assess the MariaDB pod logs to identify the failure root cause.

  2. Obtain the MariaDB admin password:

    kubectl get secret -n kaas mariadb-dbadmin-password -o jsonpath='{.data.MYSQL_DBADMIN_PASSWORD}' | base64 -d ; echo
    
  3. Verify that wsrep_local_state_comment is Donor or Desynced:

    kubectl exec -it -n kaas <failedMariadbPodName> -- mysql -uroot -p<mariadbAdminPassword> -e "SHOW status LIKE \"wsrep_local_state_comment\";"
    
  4. Restart the failed pod:

    kubectl delete pod -n kaas <failedMariadbPodName>
    


LCM
[13402] Cluster fails with error: no space left on device

Fixed in 2.8.0 for new clusters and in 2.10.0 for existing clusters

If an application running on a Container Cloud management or managed cluster fails frequently, for example, PostgreSQL, it may produce an excessive amount of core dumps. This leads to the no space left on device error on the cluster nodes and, as a result, to the broken Docker Swarm and the entire cluster.

Core dumps are disabled by default on the operating system of the Container Cloud nodes. But since Docker does not inherit the operating system settings, disable core dumps in Docker using the workaround below.

Warning

The workaround below does not apply to the baremetal-based clusters, including MOS deployments, since Docker restart may destroy the Ceph cluster.

Workaround:

  1. SSH to any machine of the affected cluster using mcc-user and the SSH key provided during the cluster creation.

  2. In /etc/docker/daemon.json, add the following parameters:

    {
        ...
        "default-ulimits": {
            "core": {
                "Hard": 0,
                "Name": "core",
                "Soft": 0
            }
        }
    }
    
  3. Restart the Docker daemon:

    systemctl restart docker
    
  4. Repeat the steps above on each machine of the affected cluster one by one.
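
To verify that the new limits are applied after the Docker restart, check the core file size limit inside a test container. This is a minimal sketch that assumes an alpine image is available locally or can be pulled:

docker run --rm alpine sh -c 'ulimit -c'
# Expected output: 0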


[8367] Adding of a new manager node to a managed cluster hangs on Deploy stage

Fixed in 2.12.0

Adding a new manager node to a managed cluster may hang due to issues with joining etcd from a new node to the existing etcd cluster. The new manager node hangs in the Deploy stage.

Symptoms:

  • The Ansible run tries executing the Wait for Docker UCP to be accessible step and fails with the following error message:

    Status code was -1 and not [200]: Request failed: <urlopen error [Errno 111] Connection refused>
    
  • The etcd logs on the leader etcd node contain the following example error message occurring every 1-2 minutes:

    2021-06-10 03:21:53.196677 W | etcdserver: not healthy for reconfigure,
    rejecting member add {ID:22bb1d4275f1c5b0 RaftAttributes:{PeerURLs:[https://<new manager IP>:12380]
    IsLearner:false} Attributes:{Name: ClientURLs:[]}}
    
    • To determine the etcd leader, run on any manager node:

      docker exec -it ucp-kv sh
      # From the inside of the container:
      ETCDCTL_API=3 etcdctl -w table --endpoints=https://<1st manager IP>:12379,https://<2nd manager IP>:12379,https://<3rd manager IP>:12379 endpoint status
      
    • To verify logs on the leader node:

      docker logs ucp-kv
      

Root cause:

In case of an unlucky network partition, the leader may lose quorum and members are not able to perform the election. For more details, see Official etcd documentation: Learning, figure 5.

Workaround:

  1. Restart etcd on the leader node:

    docker rm -f ucp-kv
    
  2. Wait several minutes until the etcd cluster starts and reconciles.

    The deployment of the new manager node will proceed and it will join the etcd cluster. After that, other MKE components will be configured and the node deployment will be finished successfully.
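
To verify that the etcd cluster has started and elected a leader, you can rerun the endpoint status check from the Symptoms section above:

docker exec -it ucp-kv sh
# From the inside of the container:
ETCDCTL_API=3 etcdctl -w table --endpoints=https://<1st manager IP>:12379,https://<2nd manager IP>:12379,https://<3rd manager IP>:12379 endpoint status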


[13303] Managed cluster update fails with the Network is unreachable error

Fixed in 2.11

A managed cluster update from the Cluster release 6.12.0 to 6.14.0 fails with worker nodes being stuck in the Deploy state with the Network is unreachable error.

Workaround:

  1. Verify the state of the loopback network interface:

    ip l show lo
    

    If the interface is not in the UNKNOWN or UP state, enable it manually:

    ip l set lo up
    

    If the interface is in the UNKNOWN or UP state, assess the cluster logs to identify the failure root cause.

  2. Repeat the cluster update procedure.


[13845] Cluster update fails during the LCM Agent upgrade with x509 error

Fixed in 2.11.0

During an update of a managed cluster from the Cluster release 6.12.0 to 6.14.0, the LCM Agent upgrade fails with the following error in the logs:

lcmAgentUpgradeStatus:
    error: 'failed to download agent binary: Get https://<mcc-cache-address>/bin/lcm/bin/lcm-agent/v0.2.0-289-gd7e9fa9c/lcm-agent:
      x509: certificate signed by unknown authority'

Only clusters initially deployed using Container Cloud 2.4.0 or earlier are affected.

As a workaround, restart lcm-agent using the service lcm-agent-* restart command on the affected nodes.
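
For example, a hypothetical invocation on an affected node; substitute the version suffix with the one of the LCM Agent service installed on that node:

service lcm-agent-v0.2.0-289-gd7e9fa9c restart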


[6066] Helm releases get stuck in FAILED or UNKNOWN state

Note

The issue affects only Helm v2 releases and is addressed for Helm v3. Starting from Container Cloud 2.19.0, all Helm releases are switched to v3.

During a management, regional, or managed cluster deployment, Helm releases may get stuck in the FAILED or UNKNOWN state although the corresponding machines statuses are Ready in the Container Cloud web UI. For example, if the StackLight Helm release fails, the links to its endpoints are grayed out in the web UI. In the cluster status, providerStatus.helm.ready and providerStatus.helm.releaseStatuses.<releaseName>.success are false.

HelmBundle cannot recover from such states and requires manual actions. The workaround below describes the recovery steps for the stacklight release that got stuck during a cluster deployment. Use this procedure as an example for other Helm releases as required.

Workaround:

  1. Verify the failed release has the UNKNOWN or FAILED status in the HelmBundle object:

    kubectl --kubeconfig <regionalClusterKubeconfigPath> get helmbundle <clusterName> -n <clusterProjectName> -o=jsonpath={.status.releaseStatuses.stacklight}
    

    Note

    In the command above and in the steps below, replace the parameters enclosed in angle brackets with the corresponding values of your cluster.


    Example of system response:

    stacklight:
      attempt: 2
      chart: ""
      finishedAt: "2021-02-05T09:41:05Z"
      hash: e314df5061bd238ac5f060effdb55e5b47948a99460c02c2211ba7cb9aadd623
      message: '[{"occurrence":1,"lastOccurrenceDate":"2021-02-05 09:41:05","content":"error
        updating the release: rpc error: code = Unknown desc = customresourcedefinitions.apiextensions.k8s.io
        \"helmbundles.lcm.mirantis.com\" already exists"}]'
      notes: ""
      status: UNKNOWN
      success: false
      version: 0.1.2-mcp-398
    
  2. Log in to the helm-controller pod console:

    kubectl --kubeconfig <affectedClusterKubeconfigPath> exec -n kube-system -it helm-controller-0 sh -c tiller
    
  3. Download the Helm v3 binary. For details, see official Helm documentation.

  4. Remove the failed release:

    helm delete <failed-release-name>
    

    For example:

    helm delete stacklight
    

    Once done, the release triggers for redeployment.


[14125] Inaccurate nodes readiness status on a managed cluster

Fixed in 2.10.0

A managed cluster deployed or updated on a regional cluster of another provider type may display inaccurate Nodes readiness live status in the Container Cloud web UI. While all nodes are ready, the Nodes status indicates that some nodes are still not ready.

The issue occurs due to the cordon-drain desynchronization between the LCMClusterState objects and the actual state of the cluster.

Note

The workaround below must be applied only by users with the writer or cluster-admin access role assigned by the Infrastructure Operator.

To verify that the cluster is affected:

  1. Export the regional cluster kubeconfig created during the regional cluster deployment:

    export KUBECONFIG=<PathToRegionalClusterKubeconfig>
    
  2. Verify that all Kubernetes nodes of the affected managed cluster are in the ready state:

    kubectl --kubeconfig <managedClusterKubeconfigPath> get nodes
    
  3. Verify that all Swarm nodes of the managed cluster are in the ready state:

    ssh -i <sshPrivateKey> root@<controlPlaneNodeIP>
    
    docker node ls
    

    Replace the parameters enclosed in angle brackets with the SSH key that was used for the managed cluster deployment and the private IP address of any control plane node of the cluster.

    If the status of the Kubernetes and Swarm nodes is ready, proceed with the next steps. Otherwise, assess the cluster logs to identify the issue with not ready nodes.

  4. Obtain the LCMClusterState items related to the swarm-drain and cordon-drain type:

    kubectl get lcmclusterstates -n <managedClusterProjectName>
    

    The command above outputs the list of all LCMClusterState items. Verify only the LCMClusterState item names that start with the swarm-drain- and cordon-drain- prefixes.

  5. Verify the status of each LCMClusterState item of the swarm-drain and cordon-drain type:

    kubectl -n <clusterProjectName> get lcmclusterstates <lcmclusterstateItemNameOfSwarmDrainOrCordonDrainType> -o=yaml
    

    Example of system response extract for the LCMClusterState items of the cordon-drain type:

    spec:
     arg: kaas-node-4c026e7a-8acd-48b2-bf5c-cdeaf99d812f
     clusterName: test-child-namespace
     type: cordon-drain
     value: "false"
    status:
      attempt: 0
      value: "false"
    

    Example of system response extract for the LCMClusterState items of the swarm-drain type:

    spec:
      arg: kaas-node-4c026e7a-8acd-48b2-bf5c-cdeaf99d812f
      clusterName: test-child-namespace
      type: swarm-drain
      value: "true"
    status:
      attempt: 334
      message: 'Error: waiting for kubernetes node kaas-node-4c026e7a-8acd-48b2-bf5c-cdeaf99d812f
        to be drained first'
    

    The cluster is affected if:

    • For cordon-drain, spec.value and status.value are "false"

    • For swarm-drain, spec.value is "true" and the status.message contains an error related to waiting for the Kubernetes cordon-drain to finish

Workaround:

For each LCMClusterState item of the swarm-drain type with spec.value == "true" and the status.message described above, replace "true" with "false" in spec.value:

kubectl -n <clusterProjectName> edit lcmclusterstate <lcmclusterstateItemNameOfSwarmDrainType>
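
Alternatively, a minimal non-interactive sketch of the same change, assuming that spec.value accepts a string value as shown in the examples above:

kubectl -n <clusterProjectName> patch lcmclusterstate <lcmclusterstateItemNameOfSwarmDrainType> --type=merge -p '{"spec":{"value":"false"}}'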


Upgrade
[15419] The iam-api pods are not ready after cluster upgrade

The iam-api pods are in the Not Ready state on the management cluster after the Container Cloud upgrade to 2.9.0 since they cannot reach Keycloak due to the CA certificate issue.

The issue affects only the clusters originally deployed using the Container Cloud release earlier than 2.6.0.

Workaround:

  1. Replace the tls.crt and tls.key fields in the mcc-ca-cert secret in the kaas namespace with the certificate and key generated during the management cluster bootstrap. These credentials are stored in the kaas-bootstrap/tls directory.

    kubectl -n kaas delete secret mcc-ca-cert && kubectl create secret generic mcc-ca-cert -n kaas --dry-run=client --from-file=tls.key=./kaas-bootstrap/tls/ca-key.pem --from-file=tls.crt=./kaas-bootstrap/tls/ca.pem -o yaml | kubectl apply -f -
    
  2. Wait for the oidc-ca-cert secret in the kaas namespace to be updated with the certificate from the mcc-ca-cert secret in the kaas namespace.

  3. Restart the iam-api pods:

    kubectl -n kaas rollout restart deployment iam-api
    

[9899] Helm releases get stuck in PENDING_UPGRADE during cluster update

Fixed in 2.14.0

Helm releases may get stuck in the PENDING_UPGRADE status during a management or managed cluster upgrade. The HelmBundle Controller cannot recover from this state and requires manual actions. The workaround below describes the recovery process for the openstack-operator release that stuck during a managed cluster update. Use it as an example for other Helm releases as required.

Workaround:

  1. Log in to the helm-controller pod console:

    kubectl exec -n kube-system -it helm-controller-0 sh -c tiller
    
  2. Identify the release that stuck in the PENDING_UPGRADE status. For example:

    ./helm --host=localhost:44134 history openstack-operator
    

    Example of system response:

    REVISION  UPDATED                   STATUS           CHART                      DESCRIPTION
    1         Tue Dec 15 12:30:41 2020  SUPERSEDED       openstack-operator-0.3.9   Install complete
    2         Tue Dec 15 12:32:05 2020  SUPERSEDED       openstack-operator-0.3.9   Upgrade complete
    3         Tue Dec 15 16:24:47 2020  PENDING_UPGRADE  openstack-operator-0.3.18  Preparing upgrade
    
  3. Roll back the failed release to the previous revision:

    1. Download the Helm v3 binary. For details, see official Helm documentation.

    2. Roll back the failed release:

      helm rollback <failed-release-name>
      

      For example:

      helm rollback openstack-operator 2
      

    Once done, the release will be reconciled.


[14152] Managed cluster upgrade fails due to DNS issues

Fixed in 2.10.0

A managed cluster release upgrade may fail due to DNS issues on pods with host networking. If this is the case, the DNS names of the Kubernetes services on the affected pod cannot be resolved.

Workaround:

  1. Export kubeconfig of the affected managed cluster. For example:

    export KUBECONFIG=~/Downloads/kubeconfig-test-cluster.yml
    
  2. Identify any existing pod with host networking. For example, tf-config-xxxxxx:

    kubectl get pods -n tf -l app=tf-config
    
  3. Verify the DNS names resolution of the Kubernetes services from this pod. For example:

    kubectl -n tf exec -it tf-config-vl4mh -c svc-monitor -- curl -k https://kubernetes.default.svc
    

    The system output must not contain DNS errors.

  4. If the DNS name cannot be resolved, restart all calico-node pods:

    kubectl delete pods -l k8s-app=calico-node -n kube-system
    


Container Cloud web UI
[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs due to the token missing the necessary role for the new project. As a workaround, relogin to the Container Cloud web UI.


Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.9.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.22.4

aws-credentials-controller

1.22.4

Bare metal

baremetal-operator Updated

5.0.2

baremetal-public-api Updated

5.0.2

baremetal-provider Updated

1.22.4

httpd

1.18.0

ironic

victoria-bionic-20210408180013

ironic-operator Updated

base-bionic-20210513142132

kaas-ipam

base-bionic-20210427213631

local-volume-provisioner Updated

1.0.6-mcp

mariadb

10.4.17-bionic-20210203155435

IAM

iam Updated

2.4.0

iam-controller Updated

1.22.4

keycloak

12.0.0

Container Cloud

admission-controller Updated

1.22.4

byo-credentials-controller Updated

1.22.4

byo-provider Updated

1.22.4

kaas-public-api Updated

1.22.4

kaas-exporter Updated

1.22.4

kaas-ui Updated

1.22.4

lcm-controller Updated

0.2.0-351-g3151d0cd

mcc-cache Updated

1.22.4

proxy-controller Updated

1.22.4

release-controller Updated

1.22.4

rhellicense-controller Updated

1.22.4

squid-proxy

0.0.1-3

Equinix Metal New

equinix-provider

1.22.5

equinix-credentials-controller

1.22.4

OpenStack Updated

openstack-provider

1.22.4

os-credentials-controller

1.22.4

VMware vSphere Updated

vsphere-provider

1.22.4

vsphere-credentials-controller

1.22.4

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.9.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-5.0.2.tgz

baremetal-public-api Updated

https://binary.mirantis.com/bm/helm/baremetal-public-api-5.0.2.tgz

ironic-python-agent-bionic.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-victoria-bionic-debug-20210226182519

ironic-python-agent-bionic.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-victoria-bionic-debug-20210226182519

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-5.0.2.tgz

local-volume-provisioner Updated

https://binary.mirantis.com/bm/helm/local-volume-provisioner-1.0.6-mcp.tgz

Docker images

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-bionic-20210513173947

httpd

mirantis.azurecr.io/lcm/nginx:1.18.0

ironic

mirantis.azurecr.io/openstack/ironic:victoria-bionic-20210408180013

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:victoria-bionic-20210408180013

ironic-operator Updated

mirantis.azurecr.io/bm/ironic-operator:base-bionic-20210513142132

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-bionic-20210427213631

mariadb

mirantis.azurecr.io/general/mariadb:10.4.17-bionic-20210203155435


Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.22.4.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.22.4.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.22.4.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.22.4.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.22.4.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.22.4.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.22.4.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.22.4.tgz

equinix-credentials-controller New

https://binary.mirantis.com/core/helm/equinix-credentials-controller-1.22.4.tgz

equinix-provider New

https://binary.mirantis.com/core/helm/equinix-provider-1.22.5.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.22.4.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.22.4.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.22.4.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.22.4.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.22.4.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.22.4.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.22.4.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.22.4.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.22.4.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.22.4.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.22.4.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.22.4.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.22.4.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.22.4.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.22.4

aws-cluster-api-controller Updated

mirantis.azurecr.io/core/aws-cluster-api-controller:1.22.4

aws-credentials-controller Updated

mirantis.azurecr.io/core/aws-credentials-controller:1.22.4

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.22.4

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.22.4

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.22.4

cluster-api-provider-equinix New

mirantis.azurecr.io/core/cluster-api-provider-equinix:1.22.5

equinix-credentials-controller New

mirantis.azurecr.io/core/equinix-credentials-controller:1.22.4

frontend Updated

mirantis.azurecr.io/core/frontend:1.22.4

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.22.4

kproxy Updated

mirantis.azurecr.io/lcm/kproxy:1.22.4

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:v0.2.0-351-g3151d0cd

nginx

mirantis.azurecr.io/lcm/nginx:1.18.0

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.22.4

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.22.4

registry

mirantis.azurecr.io/lcm/registry:2.7.1

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.22.4

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.22.4

squid-proxy

mirantis.azurecr.io/core/squid-proxy:0.0.1-3

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-api-controller:1.22.4

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.22.4


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

iamctl-linux

http://binary.mirantis.com/iam/bin/iamctl-0.5.1-linux

iamctl-darwin

http://binary.mirantis.com/iam/bin/iamctl-0.5.1-darwin

iamctl-windows

http://binary.mirantis.com/iam/bin/iamctl-0.5.1-windows

Helm charts

iam Updated

http://binary.mirantis.com/iam/helm/iam-2.4.0.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

keycloak-proxy Updated

http://binary.mirantis.com/core/helm/keycloak_proxy-1.22.4.tgz

Docker images

api

mirantis.azurecr.io/iam/api:0.5.1

auxiliary

mirantis.azurecr.io/iam/auxiliary:0.5.1

kubernetes-entrypoint

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.0-20200311160233

mariadb

mirantis.azurecr.io/general/mariadb:10.4.16-bionic-20201105025052

keycloak

mirantis.azurecr.io/iam/keycloak:0.4.0

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:6.0.1

Switch L2 templates to the new format

Before Container Cloud 2.9.0, you could configure any network interface except the default provisioning NIC, which is used for PXE and the LCM managed-to-manager connection. Since Container Cloud 2.9.0, you can configure any interface if required.

Caution

  • Deploy any new node using the updated L2 template format.

  • All L2 templates created before Container Cloud 2.9.0 are now deprecated and must not be used.

In the old L2 template format, the IpamHost object spawns two structures after processing the L2Template object for machines:

  • l2template:status:osMetadataNetwork that renders automatically using the default subnet from the management cluster and is used during the cloud-init deployment phase after provisioning is done

  • l2template:status:npTemplate that is used during the lcm-agent deployment phase and applied after lcmmachine starts deployment

In the new L2 templates format, l2template:status:npTemplate is used directly during provisioning. Therefore, a hardware node obtains and applies a complete network configuration during the first system boot.

To switch to the new L2 template format:

  1. If you do not have a subnet for connection to the management LCM cluster network (lcm-nw), manually create one. For details, see Operations Guide: Create subnets.

  2. Manually create a new L2 template that is based on your existing one. For details, see Operations Guide: Create L2 templates.

  3. In the npTemplate section, add the {{ nic 0}} parameters for the lcm-nw network.

    Configuration example:

    apiVersion: ipam.mirantis.com/v1alpha1
    kind: L2Template
    metadata:
      labels:
        bm-1490-template-controls-netplan: anymagicstring
        cluster.sigs.k8s.io/cluster-name: child-cluster
        kaas.mirantis.com/provider: baremetal
        kaas.mirantis.com/region: region-one
      name: bm-1490-template-controls-netplan
      namespace: child-ns
    spec:
      l3Layout:
        - subnetName: lcm-nw
          scope:      namespace
      ifMapping:
        - enp9s0f0
        - enp9s0f1
        - eno1
        - ens3f1
      npTemplate: |-
        version: 2
        ethernets:
          {{nic 0}}:
            dhcp4: false
            dhcp6: false
            match:
              macaddress: {{mac 0}}
            mtu: 1500
            nameservers:
              addresses: [ 172.18.176.6 ]
            # Name is mandatory
            set-name: "k8s-lcm"
            gateway4: {{ gateway_from_subnet "lcm-nw" }}
            addresses:
              - {{ ip "0:lcm-nw" }}
          {{nic 1}}:
            dhcp4: false
            dhcp6: false
            match:
              macaddress: {{mac 1}}
            set-name: {{nic 1}}
            mtu: 1500
         ....
         ....
    

    Note

    In the previous L2 template format, {{ nic 0}} for the PXE interface was not defined.

After switching to the new L2 template format, the following info message appears in the IpamHost status and indicates that the bare metal host has successfully migrated to the new L2 template format:

KUBECONFIG=kubeconfig kubectl -n managed-ns get ipamhosts
NAME               STATUS                                                                       AGE   REGION
cz7700-bmh         L2Template + L3Layout used, osMetadataNetwork is unacceptable in this mode   49m   region-one

2.8.0

The Mirantis Container Cloud GA release 2.8.0:

  • Introduces support for the Cluster release 5.15.0 that is based on Kubernetes 1.18, Mirantis Container Runtime 19.03.14, and Mirantis Kubernetes Engine 3.3.6.

  • Supports the Cluster release 6.14.0 that is based on the Cluster release 5.14.0 and represents Mirantis OpenStack for Kubernetes (MOS) 21.2.

  • Supports deprecated Cluster releases 5.14.0 and 6.12.0 that will become unsupported in one of the following Container Cloud releases.

  • Supports the Cluster release 5.11.0 only for attachment of existing MKE 3.3.4 clusters. For the deployment of new or attachment of existing MKE 3.3.6 clusters, the latest available Cluster release is used.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.8.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.8.0. For the list of enhancements in the Cluster release 5.15.0 and Cluster release 6.14.0 that are supported by the Container Cloud release 2.8.0, see the 5.15.0 and 6.14.0 sections.


Support for Keycloak 12.0

Updated the Keycloak major version from 9.0 to 12.0. For the list of highlights and enhancements, see Official Keycloak documentation.

Ironic pod logs

TECHNOLOGY PREVIEW

Implemented the possibility to collect logs of the syslog container that runs in the Ironic pod on the bare metal bootstrap, management, and managed clusters.

You can collect Ironic pod logs using the standard Container Cloud container-cloud collect logs command. The output is located in /objects/namespaced/<namespaceName>/core/pods/<ironicPodId>/syslog.log. To simplify operations with logs, the syslog container generates output in the JSON format.

Note

Logs collected by the syslog container during the bootstrap phase are not transferred to the management cluster during pivoting. These logs are located in /volume/log/ironic/ansible_conductor.log inside the Ironic pod.

LoadBalancer and ProviderInstance monitoring for cluster and machine statuses

Improved monitoring of the cluster and machine live statuses in the Container Cloud web UI:

  • Added the LoadBalancer and ProviderInstance fields.

  • Added the providerInstanceState field for an AWS machine status that includes the AWS VM ID, state, and readiness. The analogous fields instanceState and instanceID are deprecated as of Container Cloud 2.8.0 and will be removed in one of the following releases. For details, see Deprecation notes.

Updated notification about outdated cluster version in web UI

Updated the notification about outdated cluster version in the Container Cloud web UI. Now, you will be notified about any outdated managed cluster that must be updated to unblock the upgrade of the management cluster and Container Cloud to the latest version.

Caution

Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.8.0 along with the Cluster release 5.15.0:

  • [12723] [Ceph] Fixed the issue with the ceph_role_mon and ceph_role_mgr labels remaining after deletion of a node from KaaSCephCluster.

  • [13381] [LCM] Fixed the issue with requests to apiserver failing after bootstrap on the management and regional clusters with enabled proxy.

  • [13402] [LCM] Fixed the issue with the cluster failing with the no space left on device error due to an excessive amount of core dumps produced by applications that fail frequently.

    Note

    The issue is addressed only for new clusters created using Container Cloud 2.8.0. To work around the issue on existing clusters created using a Container Cloud version below 2.8.0, see LCM known issues: 13402.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.8.0 including the Cluster release 5.15.0 and 6.14.0.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


AWS
[8013] Managed cluster deployment requiring PVs may fail

Fixed in the Cluster release 7.0.0

Note

The issue below affects only the Kubernetes 1.18 deployments. Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

On a management cluster with multiple AWS-based managed clusters, some clusters fail to complete the deployments that require persistent volumes (PVs), for example, Elasticsearch. Some of the affected pods get stuck in the Pending state with the pod has unbound immediate PersistentVolumeClaims and node(s) had volume node affinity conflict errors.

Warning

The workaround below applies to HA deployments where data can be rebuilt from replicas. If you have a non-HA deployment, back up any existing data before proceeding, since all data will be lost while applying the workaround.

Workaround:

  1. Obtain the persistent volume claims related to the storage mounts of the affected pods:

    kubectl get pod/<pod_name1> pod/<pod_name2> \
    -o jsonpath='{.spec.volumes[?(@.persistentVolumeClaim)].persistentVolumeClaim.claimName}'
    

    Note

    In the command above and in the subsequent steps, substitute the parameters enclosed in angle brackets with the corresponding values.

  2. Delete the affected Pods and PersistentVolumeClaims to reschedule them. For example, for StackLight:

    kubectl -n stacklight delete \
      pod/<pod_name1> pod/<pod_name2> ... \
      pvc/<pvc_name1> pvc/<pvc_name2> ...
    


vSphere
[15698] VIP is assigned to each manager node instead of a single node

Fixed in 2.11.0

A load balancer virtual IP address (VIP) is assigned to each manager node on any type of the vSphere-based cluster. The issue occurs because the Keepalived instances cannot form a cluster since the vrrp protocol traffic is blocked by the firewall configuration on the Container Cloud nodes.

Note

Before applying the workaround below, verify that the dedicated vSphere network does not have any other virtual machines with the keepalived instance running with the same vrouter_id.

You can verify the vrouter_id value of the cluster in /etc/keepalived/keepalived.conf on the manager nodes.
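
A quick way to check this on a manager node (a minimal sketch; in the keepalived configuration file, the value is stored under the virtual_router_id keyword):

grep virtual_router_id /etc/keepalived/keepalived.conf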

Workaround

Update the firewalld configuration on each manager node of the affected cluster to allow the vrrp protocol traffic between the nodes:

  1. SSH to any manager node using mcc-user.

  2. Apply the firewalld configuration:

    firewall-cmd --add-rich-rule='rule protocol value="vrrp" accept' --permanent
    firewall-cmd --reload
    
  3. Apply the procedure to the remaining manager nodes of the cluster.
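
Optionally, verify on each manager node that the rule is active (a minimal sketch):

firewall-cmd --list-rich-rules | grep vrrp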


[14080] Node leaves the cluster after IP address change

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

A vSphere-based management cluster bootstrap fails due to a node leaving the cluster after an accidental IP address change.

The issue may affect a vSphere-based cluster only when IPAM is not enabled and IP address assignment to the vSphere virtual machines is done by a DHCP server present in the vSphere network.

By default, a DHCP server keeps the lease of an IP address for 30 minutes. Usually, the dhclient service on a VM prolongs the lease by sending DHCP requests to the server before the lease period ends. Because the prolongation request period is always shorter than the default lease time on the DHCP server, prolongation usually succeeds. However, in case of network issues, for example, when dhclient on the VM cannot reach the DHCP server, or when the VM takes longer than the lease time to power on, the VM may lose its assigned IP address and obtain a new one.

Container Cloud does not support network reconfiguration after the IP address of a VM has changed. Therefore, such an issue may lead to the VM leaving the cluster.

Symptoms:

  • One of the nodes is in the NodeNotReady or down state:

    kubectl get nodes -o wide
    docker node ls
    
  • The UCP Swarm manager logs on the healthy manager node contain the following example error:

    docker logs -f ucp-swarm-manager
    
    level=debug msg="Engine refresh failed" id="<docker node ID>|<node IP>: 12376"
    
  • If the affected node is a manager node:

    • The output of the docker info command contains the following example error:

      Error: rpc error: code = Unknown desc = The swarm does not have a leader. \
      It's possible that too few managers are online. \
      Make sure more than half of the managers are online.
      
    • The UCP controller logs contain the following example error:

      docker logs -f ucp-controller
      
      "warning","msg":"Node State Active check error: \
      Swarm Mode Manager health check error: \
      info: Cannot connect to the Docker daemon at tcp://<node IP>:12376. \
      Is the docker daemon running?
      
  • On the affected node, the IP address on the first interface eth0 does not match the IP address configured in Docker. Verify the Node Address field in the output of the docker info command.

  • The following lines are present in /var/log/messages:

    dhclient[<pid>]: bound to <node IP> -- renewal in 1530 seconds
    

    If there are several lines where the IP is different, the node is affected.

Workaround:

Select from the following options:

  • Bind IP addresses for all machines to their MAC addresses on the DHCP server for the dedicated vSphere network. In this case, VMs receive only specified IP addresses that never change.

  • Remove the Container Cloud node IPs from the IP range on the DHCP server for the dedicated vSphere network and configure the first interface eth0 on VMs with a static IP address.

  • If a managed cluster is affected, redeploy it with IPAM enabled for new machines to be created and IPs to be assigned properly.

[14458] Failure to create a container for pod: cannot allocate memory

Fixed in 2.9.0 for new clusters

Newly created pods may fail to run and have the CrashLoopBackOff status on long-living Container Cloud clusters deployed on RHEL 7.8 using the VMware vSphere provider. The following is an example output of the kubectl describe pod <pod-name> -n <projectName> command:

State:        Waiting
Reason:       CrashLoopBackOff
Last State:   Terminated
Reason:       ContainerCannotRun
Message:      OCI runtime create failed: container_linux.go:349:
              starting container process caused "process_linux.go:297:
              applying cgroup configuration for process caused
              "mkdir /sys/fs/cgroup/memory/kubepods/burstable/<pod-id>/<container-id>>:
              cannot allocate memory": unknown

The issue occurs due to the Kubernetes and Docker community issues.

According to the RedHat solution, the workaround is to disable the kernel memory accounting feature by appending cgroup.memory=nokmem to the kernel command line.

Note

The workaround below applies to the existing clusters only. The issue is resolved for new Container Cloud 2.9.0 deployments since the workaround below automatically applies to the VM template built during the vSphere-based management cluster bootstrap.

Apply the following workaround on each machine of the affected cluster.

Workaround

  1. SSH to any machine of the affected cluster using mcc-user and the SSH key provided during the cluster creation to proceed as the root user.

  2. In /etc/default/grub, set cgroup.memory=nokmem for GRUB_CMDLINE_LINUX.

  3. Update kernel:

    yum install kernel kernel-headers kernel-tools kernel-tools-libs kexec-tools
    
  4. Update the grub configuration:

    grub2-mkconfig -o /boot/grub2/grub.cfg
    
  5. Reboot the machine.

  6. Wait for the machine to become available.

  7. Wait for 5 minutes for Docker and Kubernetes services to start.

  8. Verify that the machine is Ready:

    docker node ls
    kubectl get nodes
    
  9. Repeat the steps above on the remaining machines of the affected cluster.
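
After the reboot, you can optionally confirm that the kernel option is active on each machine (a minimal sketch):

grep -o 'cgroup.memory=nokmem' /proc/cmdline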



OpenStack
[10424] Regional cluster cleanup fails by timeout

An OpenStack-based regional cluster cleanup fails with the timeout error.

Workaround:

  1. Wait for the Cluster object to be deleted in the bootstrap cluster:

    kubectl --kubeconfig <(./bin/kind get kubeconfig --name clusterapi) get cluster
    

    The system output must be empty.

  2. Remove the bootstrap cluster manually:

    ./bin/kind delete cluster --name clusterapi
    


Bare metal
[7655] Wrong status for an incorrectly configured L2 template

Fixed in 2.11.0

If an L2 template is configured incorrectly, a bare metal cluster is deployed successfully but with runtime errors in the IpamHost object.

Workaround:

If you suspect that the machine is not working properly because of incorrect network configuration, verify the status of the corresponding IpamHost object. Inspect the l2RenderResult and ipAllocationResult object fields for error messages.
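
For example, the following sketch inspects both fields, assuming they reside under the IpamHost object status:

kubectl -n <clusterProjectName> get ipamhost <ipamHostName> -o jsonpath='{.status.l2RenderResult}{"\n"}{.status.ipAllocationResult}{"\n"}'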



Storage
[14051] CephCluster creation fails if manageOsds is enabled before deploy

Fixed in 2.9.0

If manageOsds is enabled in the pre-deployment KaaSCephCluster template, the bare metal management or managed cluster fails to deploy due to the CephCluster creation failure.

As a workaround, disable manageOsds in the KaaSCephCluster template before the cluster deployment. You can enable this parameter after deployment as described in Ceph advanced configuration.
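
The following sketch shows the relevant template fragment; the exact nesting of manageOsds inside the KaaSCephCluster specification is an assumption for illustration:

spec:
  cephClusterSpec:
    manageOsds: false   # enable only after the cluster is deployed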

[10050] Ceph OSD pod is in the CrashLoopBackOff state after disk replacement

Fixed in 2.11.0

If you use a custom BareMetalHostProfile, after disk replacement on a Ceph OSD, the Ceph OSD pod switches to the CrashLoopBackOff state due to the Ceph OSD authorization key failing to be created properly.

Workaround:

  1. Export kubeconfig of your managed cluster. For example:

    export KUBECONFIG=~/Downloads/kubeconfig-test-cluster.yml
    
  2. Log in to the ceph-tools pod:

    kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
    
  3. Delete the authorization key for the failed Ceph OSD:

    ceph auth del osd.<ID>
    
  4. SSH to the node on which the Ceph OSD cannot be created.

  5. Clean up the disk that will be a base for the failed Ceph OSD. For details, see official Rook documentation.

    Note

    Ignore failures of the sgdisk --zap-all $DISK and blkdiscard $DISK commands if any.

  6. On the managed cluster, restart Rook Operator:

    kubectl -n rook-ceph delete pod -l app=rook-ceph-operator
    


IAM
[13385] MariaDB pods fail to start after SST sync

Fixed in 2.12.0

The MariaDB pods fail to start after MariaDB blocks itself during the State Snapshot Transfers sync.

Workaround:

  1. Verify the failed pod readiness:

    kubectl describe pod -n kaas <failedMariadbPodName>
    

    If the readiness probe failed with the WSREP not synced message, proceed to the next step. Otherwise, assess the MariaDB pod logs to identify the failure root cause.

  2. Obtain the MariaDB admin password:

    kubectl get secret -n kaas mariadb-dbadmin-password -o jsonpath='{.data.MYSQL_DBADMIN_PASSWORD}' | base64 -d ; echo
    
  3. Verify that wsrep_local_state_comment is Donor or Desynced:

    kubectl exec -it -n kaas <failedMariadbPodName> -- mysql -uroot -p<mariadbAdminPassword> -e "SHOW status LIKE \"wsrep_local_state_comment\";"
    
  4. Restart the failed pod:

    kubectl delete pod -n kaas <failedMariadbPodName>
    


LCM
[13402] Cluster fails with error: no space left on device

Fixed in 2.8.0 for new clusters and in 2.10.0 for existing clusters

If an application running on a Container Cloud management or managed cluster fails frequently, for example, PostgreSQL, it may produce an excessive amount of core dumps. This leads to the no space left on device error on the cluster nodes and, as a result, to the broken Docker Swarm and the entire cluster.

Core dumps are disabled by default on the operating system of the Container Cloud nodes. But since Docker does not inherit the operating system settings, disable core dumps in Docker using the workaround below.

Warning

The workaround below does not apply to the baremetal-based clusters, including MOS deployments, since Docker restart may destroy the Ceph cluster.

Workaround:

  1. SSH to any machine of the affected cluster using mcc-user and the SSH key provided during the cluster creation.

  2. In /etc/docker/daemon.json, add the following parameters:

    {
        ...
        "default-ulimits": {
            "core": {
                "Hard": 0,
                "Name": "core",
                "Soft": 0
            }
        }
    }
    
  3. Restart the Docker daemon:

    systemctl restart docker
    
  4. Repeat the steps above on each machine of the affected cluster one by one.
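
Optionally, verify on each machine that the new default limit is effective. The sketch below assumes that any container image, here a hypothetical placeholder, is available locally; the expected output is 0:

docker run --rm <anyAvailableImage> sh -c 'ulimit -c'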


[13845] Cluster update fails during the LCM Agent upgrade with x509 error

Fixed in 2.11.0

During an update of a managed cluster from the Cluster release 6.12.0 to 6.14.0, the LCM Agent upgrade fails with the following error in logs:

lcmAgentUpgradeStatus:
    error: 'failed to download agent binary: Get https://<mcc-cache-address>/bin/lcm/bin/lcm-agent/v0.2.0-289-gd7e9fa9c/lcm-agent:
      x509: certificate signed by unknown authority'

Only clusters initially deployed using Container Cloud 2.4.0 or earlier are affected.

As a workaround, restart lcm-agent using the service lcm-agent-* restart command on the affected nodes.


[6066] Helm releases get stuck in FAILED or UNKNOWN state

Note

The issue affects only Helm v2 releases and is addressed for Helm v3. Starting from Container Cloud 2.19.0, all Helm releases are switched to v3.

During a management, regional, or managed cluster deployment, Helm releases may get stuck in the FAILED or UNKNOWN state although the corresponding machines statuses are Ready in the Container Cloud web UI. For example, if the StackLight Helm release fails, the links to its endpoints are grayed out in the web UI. In the cluster status, providerStatus.helm.ready and providerStatus.helm.releaseStatuses.<releaseName>.success are false.

HelmBundle cannot recover from such states and requires manual actions. The workaround below describes the recovery steps for the stacklight release that got stuck during a cluster deployment. Use this procedure as an example for other Helm releases as required.
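
Before proceeding, you can check the Helm readiness flag reported in the cluster status. The sketch below assumes that the providerStatus fields mentioned above reside under .status of the Cluster object:

kubectl --kubeconfig <regionalClusterKubeconfigPath> -n <clusterProjectName> get cluster <clusterName> -o jsonpath='{.status.providerStatus.helm.ready}'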

Workaround:

  1. Verify the failed release has the UNKNOWN or FAILED status in the HelmBundle object:

    kubectl --kubeconfig <regionalClusterKubeconfigPath> get helmbundle <clusterName> -n <clusterProjectName> -o=jsonpath={.status.releaseStatuses.stacklight}
    

    In the command above and in the steps below, replace the parameters enclosed in angle brackets with the corresponding values of your cluster.
    

    Example of system response:

    stacklight:
      attempt: 2
      chart: ""
      finishedAt: "2021-02-05T09:41:05Z"
      hash: e314df5061bd238ac5f060effdb55e5b47948a99460c02c2211ba7cb9aadd623
      message: '[{"occurrence":1,"lastOccurrenceDate":"2021-02-05 09:41:05","content":"error
        updating the release: rpc error: code = Unknown desc = customresourcedefinitions.apiextensions.k8s.io
        \"helmbundles.lcm.mirantis.com\" already exists"}]'
      notes: ""
      status: UNKNOWN
      success: false
      version: 0.1.2-mcp-398
    
  2. Log in to the helm-controller pod console:

    kubectl --kubeconfig <affectedClusterKubeconfigPath> exec -n kube-system -it helm-controller-0 sh -c tiller
    
  3. Download the Helm v3 binary. For details, see official Helm documentation.

  4. Remove the failed release:

    helm delete <failed-release-name>
    

    For example:

    helm delete stacklight
    

    Once done, the release is triggered for redeployment.


[14125] Inaccurate nodes readiness status on a managed cluster

Fixed in 2.10.0

A managed cluster deployed or updated on a regional cluster of another provider type may display inaccurate Nodes readiness live status in the Container Cloud web UI. While all nodes are ready, the Nodes status indicates that some nodes are still not ready.

The issue occurs due to the cordon-drain desynchronization between the LCMClusterState objects and the actual state of the cluster.

Note

The workaround below must be applied only by users with the writer or cluster-admin access role assigned by the Infrastructure Operator.

To verify that the cluster is affected:

  1. Export the regional cluster kubeconfig created during the regional cluster deployment:

    export KUBECONFIG=<PathToRegionalClusterKubeconfig>
    
  2. Verify that all Kubernetes nodes of the affected managed cluster are in the ready state:

    kubectl --kubeconfig <managedClusterKubeconfigPath> get nodes
    
  3. Verify that all Swarm nodes of the managed cluster are in the ready state:

    ssh -i <sshPrivateKey> root@<controlPlaneNodeIP>
    
    docker node ls
    

    Replace the parameters enclosed in angle brackets with the SSH key that was used for the managed cluster deployment and the private IP address of any control plane node of the cluster.

    If the status of the Kubernetes and Swarm nodes is ready, proceed with the next steps. Otherwise, assess the cluster logs to identify the issue with not ready nodes.

  4. Obtain the LCMClusterState items related to the swarm-drain and cordon-drain type:

    kubectl get lcmclusterstates -n <managedClusterProjectName>
    

    The command above outputs the list of all LCMClusterState items. Verify only the LCMClusterState item names that start with the swarm-drain- and cordon-drain- prefixes.

  5. Verify the status of each LCMClusterState item of the swarm-drain and cordon-drain type:

    kubectl -n <clusterProjectName> get lcmclusterstates <lcmClusterStateItemNameOfSwarmDrainOrCordonDrainType> -o=yaml
    

    Example of system response extract for the LCMClusterState items of the cordon-drain type:

    spec:
      arg: kaas-node-4c026e7a-8acd-48b2-bf5c-cdeaf99d812f
      clusterName: test-child-namespace
      type: cordon-drain
      value: "false"
    status:
      attempt: 0
      value: "false"
    

    Example of system response extract for the LCMClusterState items of the swarm-drain type:

    spec:
      arg: kaas-node-4c026e7a-8acd-48b2-bf5c-cdeaf99d812f
      clusterName: test-child-namespace
      type: swarm-drain
      value: "true"
    status:
      attempt: 334
      message: 'Error: waiting for kubernetes node kaas-node-4c026e7a-8acd-48b2-bf5c-cdeaf99d812f
        to be drained first'
    

    The cluster is affected if:

    • For cordon-drain, spec.value and status.value are "false"

    • For swarm-drain, spec.value is "true" and the status.message contains an error related to waiting for the Kubernetes cordon-drain to finish

Workaround:

For each LCMClusterState item of the swarm-drain type with spec.value == "true" and the status.message described above, replace "true" with "false" in spec.value:

kubectl -n <clusterProjectName> edit lcmclusterstate <lcmClusterStateItemNameOfSwarmDrainType>
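
As a non-interactive alternative to the interactive edit above, the same change can be applied with a patch (a minimal sketch with the same effect):

kubectl -n <clusterProjectName> patch lcmclusterstate <lcmClusterStateItemNameOfSwarmDrainType> --type merge -p '{"spec":{"value":"false"}}'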


Upgrade
[13292] Local volume provisioner pod stuck in Terminating status after upgrade

After upgrade of Container Cloud from 2.6.0 to 2.7.0, the local volume provisioner pod in the default project is stuck in the Terminating status, even after upgrade to 2.8.0.

This issue does not affect functioning of the management, regional, or managed clusters. The issue does not prevent the successful upgrade of the cluster.

Workaround:

  1. Verify that the cluster is affected:

    kubectl get pods -n default | grep local-volume-provisioner
    

    If the output contains a pod with the Terminating status, the cluster is affected.

    Capture the affected pod name, if any.

  2. Delete the affected pod:

    kubectl -n default delete pod <LVPPodName> --force
    
[9899] Helm releases get stuck in PENDING_UPGRADE during cluster update

Fixed in 2.14.0

Helm releases may get stuck in the PENDING_UPGRADE status during a management or managed cluster upgrade. The HelmBundle Controller cannot recover from this state and requires manual actions. The workaround below describes the recovery process for the openstack-operator release that got stuck during a managed cluster update. Use it as an example for other Helm releases as required.

Workaround:

  1. Log in to the helm-controller pod console:

    kubectl exec -n kube-system -it helm-controller-0 sh -c tiller
    
  2. Identify the release that is stuck in the PENDING_UPGRADE status. For example:

    ./helm --host=localhost:44134 history openstack-operator
    

    Example of system response:

    REVISION  UPDATED                   STATUS           CHART                      DESCRIPTION
    1         Tue Dec 15 12:30:41 2020  SUPERSEDED       openstack-operator-0.3.9   Install complete
    2         Tue Dec 15 12:32:05 2020  SUPERSEDED       openstack-operator-0.3.9   Upgrade complete
    3         Tue Dec 15 16:24:47 2020  PENDING_UPGRADE  openstack-operator-0.3.18  Preparing upgrade
    
  3. Roll back the failed release to the previous revision:

    1. Download the Helm v3 binary. For details, see official Helm documentation.

    2. Roll back the failed release:

      helm rollback <failed-release-name> <revision>
      

      For example:

      helm rollback openstack-operator 2
      

    Once done, the release will be reconciled.


[14152] Managed cluster upgrade fails due to DNS issues

Fixed in 2.10.0

A managed cluster release upgrade may fail due to DNS issues on pods with host networking. If this is the case, the DNS names of the Kubernetes services on the affected pod cannot be resolved.

Workaround:

  1. Export kubeconfig of the affected managed cluster. For example:

    export KUBECONFIG=~/Downloads/kubeconfig-test-cluster.yml
    
  2. Identify any existing pod with host networking. For example, tf-config-xxxxxx:

    kubectl get pods -n tf -l app=tf-config
    
  3. Verify the DNS names resolution of the Kubernetes services from this pod. For example:

    kubectl -n tf exec -it tf-config-vl4mh -c svc-monitor -- curl -k https://kubernetes.default.svc
    

    The system output must not contain DNS errors.

  4. If the DNS name cannot be resolved, restart all calico-node pods:

    kubectl delete pods -l k8s-app=calico-node -n kube-system
    


Container Cloud web UI
[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs due to the token missing the necessary role for the new project. As a workaround, relogin to the Container Cloud web UI.


Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.8.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.20.2

aws-credentials-controller

1.20.2

Bare metal

baremetal-operator Updated

4.1.3

baremetal-public-api Updated

4.1.3

baremetal-provider Updated

1.20.2

httpd

1.18.0

ironic Updated

victoria-bionic-20210408180013

ironic-operator Updated

base-bionic-20210409133604

kaas-ipam Updated

base-bionic-20210427213631

local-volume-provisioner

1.0.5-mcp

mariadb

10.4.17-bionic-20210203155435

IAM

iam Updated

2.3.2

iam-controller Updated

1.20.2

keycloak

12.0.0

Container Cloud Updated

admission-controller

1.20.2

byo-credentials-controller

1.20.2

byo-provider

1.20.2

kaas-public-api

1.20.2

kaas-exporter

1.20.2

kaas-ui

1.20.2

lcm-controller

0.2.0-327-g5676f4e3

mcc-cache

1.20.2

proxy-controller

1.20.2

release-controller

1.20.2

rhellicense-controller

1.20.2

squid-proxy

0.0.1-3

OpenStack Updated

openstack-provider

1.20.2

os-credentials-controller

1.20.2

VMware vSphere Updated

vsphere-provider

1.20.2

vsphere-credentials-controller

1.20.2

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.8.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-4.1.3.tgz

baremetal-public-api Updated

https://binary.mirantis.com/bm/helm/baremetal-public-api-4.1.3.tgz

ironic-python-agent-bionic.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-victoria-bionic-debug-20210226182519

ironic-python-agent-bionic.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-victoria-bionic-debug-20210226182519

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-4.1.3.tgz

local-volume-provisioner

https://binary.mirantis.com/bm/helm/local-volume-provisioner-1.0.5-mcp.tgz

Docker images

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-bionic-20210317164614

httpd

mirantis.azurecr.io/lcm/nginx:1.18.0

ironic Updated

mirantis.azurecr.io/openstack/ironic:victoria-bionic-20210408180013

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:victoria-bionic-20210408180013

ironic-operator Updated

mirantis.azurecr.io/bm/ironic-operator:base-bionic-20210409133604

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-bionic-20210427213631

mariadb

mirantis.azurecr.io/general/mariadb:10.4.17-bionic-20210203155435


Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.20.2.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.20.2.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.20.2.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.20.2.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.20.2.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.20.2.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.20.2.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.20.2.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.20.2.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.20.2.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.20.2.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.20.2.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.20.2.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.20.2.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.20.2.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.20.2.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.20.2.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.20.2.tgz

rhellicense-controller

https://binary.mirantis.com/core/helm/rhellicense-controller-1.20.2.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.20.2.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.20.2.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.20.2.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.20.2

aws-cluster-api-controller Updated

mirantis.azurecr.io/core/aws-cluster-api-controller:1.20.2

aws-credentials-controller Updated

mirantis.azurecr.io/core/aws-credentials-controller:1.20.2

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.20.2

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.20.2

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.20.2

frontend Updated

mirantis.azurecr.io/core/frontend:1.20.2

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.20.2

kproxy Updated

mirantis.azurecr.io/lcm/kproxy:1.20.2

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:v0.2.0-327-g5676f4e3

nginx

mirantis.azurecr.io/lcm/nginx:1.18.0

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.20.2

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.20.2

registry

mirantis.azurecr.io/lcm/registry:2.7.1

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.20.2

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.20.2

squid-proxy Updated

mirantis.azurecr.io/core/squid-proxy:0.0.1-3

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-api-controller:1.20.2

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.20.2


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

iamctl-linux Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.1-linux

iamctl-darwin Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.1-darwin

iamctl-windows Updated

http://binary.mirantis.com/iam/bin/iamctl-0.5.1-windows

Helm charts Updated

iam

http://binary.mirantis.com/iam/helm/iam-2.3.2.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

keycloak-proxy Updated

http://binary.mirantis.com/core/helm/keycloak_proxy-1.20.2.tgz

Docker images

api Updated

mirantis.azurecr.io/iam/api:0.5.1

auxiliary Updated

mirantis.azurecr.io/iam/auxiliary:0.5.1

kubernetes-entrypoint Updated

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.0-20200311160233

mariadb

mirantis.azurecr.io/general/mariadb:10.4.16-bionic-20201105025052

keycloak

mirantis.azurecr.io/iam/keycloak:0.4.0

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:6.0.1

2.7.0

The Mirantis Container Cloud GA release 2.7.0:

  • Introduces support for the Cluster release 5.14.0 that is based on Kubernetes 1.18, Mirantis Container Runtime 19.03.14, and Mirantis Kubernetes Engine 3.3.6.

  • Supports the Cluster release 6.14.0 that is based on the Cluster release 5.14.0 and represents Mirantis OpenStack for Kubernetes (MOS) 21.2.

  • Supports deprecated Cluster releases 5.13.0 and 6.12.0 that will become unsupported in one of the following Container Cloud releases.

  • Supports the Cluster release 5.11.0 only for attachment of existing MKE 3.3.4 clusters. For the deployment of new or attachment of existing MKE 3.3.6 clusters, the latest available Cluster release is used.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.7.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.7.0. For the list of enhancements in the Cluster release 5.14.0 and Cluster release 6.14.0 that are supported by the Container Cloud release 2.7.0, see the 5.14.0 and 6.14.0 sections.


Full support for the VMware vSphere provider

Introduced general availability support for the VMware vSphere provider after completing full integration of the vSphere provider on RHEL with Container Cloud.

During the Container Cloud 2.6.0 - 2.7.0 release cycle, the following improvements were added:

  • Removed the StackLight limitations

  • Completed the integration of proxy support for the vSphere-based managed clusters

  • Completed the integration of the non-DHCP support for regional clusters

  • Addressed a number of critical and major issues

Universal SSH user

Implemented a universal SSH user mcc-user to replace the existing default SSH user names. The mcc-user user name is applicable to any Container Cloud provider and node type, including Bastion.

The existing SSH user names are deprecated as of Container Cloud 2.7.0. SSH keys will be managed only for mcc-user as of one of the following Container Cloud releases.
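
For example, to connect to a node of any cluster type as the universal user (a minimal sketch; the key path is a placeholder):

ssh -i <pathToSshPrivateKey> mcc-user@<nodeIP>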

Configuration of SSH keys on existing clusters using web UI

Implemented the possibility to configure SSH keys on existing clusters using the Container Cloud web UI. You can now add or remove SSH keys on running managed clusters using the Configure cluster web UI menu.

After the update of your Cluster release to the latest version supported by 2.7.0 for the OpenStack and AWS-based managed clusters, a one-time redeployment of the Bastion node is required to apply the first configuration change of SSH keys. For this purpose, the Allow Bastion Redeploy one-time check box is added to the Configure Cluster wizard in the Container Cloud web UI.

Note

After the Bastion node redeploys on the AWS-based clusters, its public IP address changes.

Cluster and machines live statuses in web UI

Implemented the possibility to monitor live status of a cluster and machine deployment or update using the Container Cloud web UI. You can now follow the deployment readiness and health of essential cluster components, such as Helm, Kubernetes, kubelet, Swarm, OIDC, StackLight, and others. For machines, you can monitor nodes readiness reported by kubelet and nodes health reported by Swarm.

Enabling of proxy access using web UI for vSphere, AWS, and bare metal

Extended the Container Cloud web UI with the parameters that enable proxy access on managed clusters for the remaining cloud providers: vSphere, AWS, and bare metal.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.7.0 along with the Cluster releases 5.14.0 and 6.14.0:

  • [13176] [vSphere] Fixed the issue with the cluster network settings related to IPAM disappearing from the cluster provider spec and leading to invalid metadata provided to virtual machines.

  • [12683] [vSphere] Fixed the issue with the kaas-ipam pods being installed and continuously restarted even if IPAM was disabled on the vSphere-based regional cluster deployed on top of an AWS-based management cluster.


  • [12305] [Ceph] Fixed the issue with inability to define the CRUSH map rules through the KaaSCephCluster custom resource. For details, see Operations Guide: Ceph advanced configuration.

  • [10060] [Ceph] Fixed the issue with a Ceph OSD node removal not being triggered properly and failing after updating the KaasCephCluster custom resource (CR).


  • [13078] [StackLight] Fixed the issue with Elasticsearch not receiving data from Fluentd due to the limit of open index shards per node.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.7.0 including the Cluster releases 5.14.0 and 6.14.0.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


AWS
[8013] Managed cluster deployment requiring PVs may fail

Fixed in the Cluster release 7.0.0

Note

The issue below affects only the Kubernetes 1.18 deployments. Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

On a management cluster with multiple AWS-based managed clusters, some clusters fail to complete the deployments that require persistent volumes (PVs), for example, Elasticsearch. Some of the affected pods get stuck in the Pending state with the pod has unbound immediate PersistentVolumeClaims and node(s) had volume node affinity conflict errors.

Warning

The workaround below applies to HA deployments where data can be rebuilt from replicas. If you have a non-HA deployment, back up any existing data before proceeding, since all data will be lost while applying the workaround.

Workaround:

  1. Obtain the persistent volume claims related to the storage mounts of the affected pods:

    kubectl get pod/<pod_name1> pod/<pod_name2> \
    -o jsonpath='{.spec.volumes[?(@.persistentVolumeClaim)].persistentVolumeClaim.claimName}'
    

    Note

    In the command above and in the subsequent steps, substitute the parameters enclosed in angle brackets with the corresponding values.

  2. Delete the affected Pods and PersistentVolumeClaims to reschedule them. For example, for StackLight:

    kubectl -n stacklight delete \
      pod/<pod_name1> pod/<pod_name2> ... \
      pvc/<pvc_name1> pvc/<pvc_name2> ...
    


vSphere
[14458] Failure to create a container for pod: cannot allocate memory

Fixed in 2.9.0 for new clusters

Newly created pods may fail to run and have the CrashLoopBackOff status on long-living Container Cloud clusters deployed on RHEL 7.8 using the VMware vSphere provider. The following is an example output of the kubectl describe pod <pod-name> -n <projectName> command:

State:        Waiting
Reason:       CrashLoopBackOff
Last State:   Terminated
Reason:       ContainerCannotRun
Message:      OCI runtime create failed: container_linux.go:349:
              starting container process caused "process_linux.go:297:
              applying cgroup configuration for process caused
              "mkdir /sys/fs/cgroup/memory/kubepods/burstable/<pod-id>/<container-id>>:
              cannot allocate memory": unknown

The issue occurs due to the Kubernetes and Docker community issues.

According to the RedHat solution, the workaround is to disable the kernel memory accounting feature by appending cgroup.memory=nokmem to the kernel command line.

Note

The workaround below applies to the existing clusters only. The issue is resolved for new Container Cloud 2.9.0 deployments since the workaround below automatically applies to the VM template built during the vSphere-based management cluster bootstrap.

Apply the following workaround on each machine of the affected cluster.

Workaround

  1. SSH to any machine of the affected cluster using mcc-user and the SSH key provided during the cluster creation to proceed as the root user.

  2. In /etc/default/grub, set cgroup.memory=nokmem for GRUB_CMDLINE_LINUX.

  3. Update kernel:

    yum install kernel kernel-headers kernel-tools kernel-tools-libs kexec-tools
    
  4. Update the grub configuration:

    grub2-mkconfig -o /boot/grub2/grub.cfg
    
  5. Reboot the machine.

  6. Wait for the machine to become available.

  7. Wait for 5 minutes for Docker and Kubernetes services to start.

  8. Verify that the machine is Ready:

    docker node ls
    kubectl get nodes
    
  9. Repeat the steps above on the remaining machines of the affected cluster.



OpenStack
[10424] Regional cluster cleanup fails by timeout

An OpenStack-based regional cluster cleanup fails with the timeout error.

Workaround:

  1. Wait for the Cluster object to be deleted in the bootstrap cluster:

    kubectl --kubeconfig <(./bin/kind get kubeconfig --name clusterapi) get cluster
    

    The system output must be empty.

  2. Remove the bootstrap cluster manually:

    ./bin/kind delete cluster --name clusterapi
    


Bare metal
[7655] Wrong status for an incorrectly configured L2 template

Fixed in 2.11.0

If an L2 template is configured incorrectly, a bare metal cluster is deployed successfully but with runtime errors in the IpamHost object.

Workaround:

If you suspect that the machine is not working properly because of incorrect network configuration, verify the status of the corresponding IpamHost object. Inspect the l2RenderResult and ipAllocationResult object fields for error messages.



Storage
[7073] Cannot automatically remove a Ceph node

Fixed in 2.16.0

When removing a worker node, it is not possible to automatically remove a Ceph node. The workaround is to manually remove the Ceph node from the Ceph cluster as described in Operations Guide: Add, remove, or reconfigure Ceph nodes before removing the worker node from your deployment.

[10050] Ceph OSD pod is in the CrashLoopBackOff state after disk replacement

Fixed in 2.11.0

If you use a custom BareMetalHostProfile, after disk replacement on a Ceph OSD, the Ceph OSD pod switches to the CrashLoopBackOff state due to the Ceph OSD authorization key failing to be created properly.

Workaround:

  1. Export kubeconfig of your managed cluster. For example:

    export KUBECONFIG=~/Downloads/kubeconfig-test-cluster.yml
    
  2. Log in to the ceph-tools pod:

    kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
    
  3. Delete the authorization key for the failed Ceph OSD:

    ceph auth del osd.<ID>
    
  4. SSH to the node on which the Ceph OSD cannot be created.

  5. Clean up the disk that will be a base for the failed Ceph OSD. For details, see official Rook documentation.

    Note

    Ignore failures of the sgdisk --zap-all $DISK and blkdiscard $DISK commands if any.

  6. On the managed cluster, restart Rook Operator:

    kubectl -n rook-ceph delete pod -l app=rook-ceph-operator
    

[12723] ceph_role_* labels remain after deleting a node from KaaSCephCluster

Fixed in 2.8.0

The ceph_role_mon and ceph_role_mgr labels that Ceph Controller assigns to a node during a Ceph cluster creation are not automatically removed after deleting a node from KaaSCephCluster.

As a workaround, manually remove the labels using the following commands:

kubectl label node <nodeName> ceph_role_mon-
kubectl label node <nodeName> ceph_role_mgr-

IAM
[13385] MariaDB pods fail to start after SST sync

Fixed in 2.12.0

The MariaDB pods fail to start after MariaDB blocks itself during the State Snapshot Transfers sync.

Workaround:

  1. Verify the failed pod readiness:

    kubectl describe pod -n kaas <failedMariadbPodName>
    

    If the readiness probe failed with the WSREP not synced message, proceed to the next step. Otherwise, assess the MariaDB pod logs to identify the failure root cause.

  2. Obtain the MariaDB admin password:

    kubectl get secret -n kaas mariadb-dbadmin-password -o jsonpath='{.data.MYSQL_DBADMIN_PASSWORD}' | base64 -d ; echo
    
  3. Verify that wsrep_local_state_comment is Donor or Desynced:

    kubectl exec -it -n kaas <failedMariadbPodName> -- mysql -uroot -p<mariadbAdminPassword> -e "SHOW status LIKE \"wsrep_local_state_comment\";"
    
  4. Restart the failed pod:

    kubectl delete pod -n kaas <failedMariadbPodName>
    


LCM
[13845] Cluster update fails during the LCM Agent upgrade with x509 error

Fixed in 2.11.0

During an update of a managed cluster from the Cluster release 6.12.0 to 6.14.0, the LCM Agent upgrade fails with the following error in logs:

lcmAgentUpgradeStatus:
    error: 'failed to download agent binary: Get https://<mcc-cache-address>/bin/lcm/bin/lcm-agent/v0.2.0-289-gd7e9fa9c/lcm-agent:
      x509: certificate signed by unknown authority'

Only clusters initially deployed using Container Cloud 2.4.0 or earlier are affected.

As a workaround, restart lcm-agent using the service lcm-agent-* restart command on the affected nodes.


[13381] Management and regional clusters with enabled proxy are unreachable

Fixed in 2.8.0

After bootstrap, requests to apiserver fail on the management and regional clusters with enabled proxy.

As a workaround, before running bootstrap.sh, add the entire range of IP addresses that will be used for floating IPs to the NO_PROXY environment variable.
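
A minimal sketch, assuming a hypothetical floating IP range of 10.0.0.0/24; whether CIDR notation or an explicit comma-separated list of addresses is required depends on the consuming component:

export NO_PROXY="${NO_PROXY},10.0.0.0/24"
./bootstrap.sh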

[13402] Cluster fails with error: no space left on device

Fixed in 2.8.0 for new clusters and in 2.10.0 for existing clusters

If an application running on a Container Cloud management or managed cluster fails frequently, for example, PostgreSQL, it may produce an excessive amount of core dumps. This leads to the no space left on device error on the cluster nodes and, as a result, to the broken Docker Swarm and the entire cluster.

Core dumps are disabled by default on the operating system of the Container Cloud nodes. But since Docker does not inherit the operating system settings, disable core dumps in Docker using the workaround below.

Warning

The workaround below does not apply to the baremetal-based clusters, including MOS deployments, since Docker restart may destroy the Ceph cluster.

Workaround:

  1. SSH to any machine of the affected cluster using mcc-user and the SSH key provided during the cluster creation.

  2. In /etc/docker/daemon.json, add the following parameters:

    {
        ...
        "default-ulimits": {
            "core": {
                "Hard": 0,
                "Name": "core",
                "Soft": 0
            }
        }
    }
    
  3. Restart the Docker daemon:

    systemctl restart docker
    
  4. Repeat the steps above on each machine of the affected cluster one by one.


[8112] Nodes occasionally become Not Ready on long-running clusters

On long-running Container Cloud clusters, one or more nodes may occasionally become Not Ready with different errors in the ucp-kubelet containers of failed nodes.

As a workaround, restart ucp-kubelet on the failed node:

ctr -n com.docker.ucp snapshot rm ucp-kubelet
docker rm -f ucp-kubelet

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

[10029] Authentication fails with the 401 Unauthorized error

Authentication may not work on some controller nodes after a managed cluster creation. As a result, the Kubernetes API operations with the managed cluster kubeconfig fail with Response Status: 401 Unauthorized.

As a workaround, manually restart the ucp-controller and ucp-auth Docker services on the affected node.

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

[6066] Helm releases get stuck in FAILED or UNKNOWN state

Note

The issue affects only Helm v2 releases and is addressed for Helm v3. Starting from Container Cloud 2.19.0, all Helm releases are switched to v3.

During a management, regional, or managed cluster deployment, Helm releases may get stuck in the FAILED or UNKNOWN state although the corresponding machines statuses are Ready in the Container Cloud web UI. For example, if the StackLight Helm release fails, the links to its endpoints are grayed out in the web UI. In the cluster status, providerStatus.helm.ready and providerStatus.helm.releaseStatuses.<releaseName>.success are false.

HelmBundle cannot recover from such states and requires manual actions. The workaround below describes the recovery steps for the stacklight release that got stuck during a cluster deployment. Use this procedure as an example for other Helm releases as required.

Workaround:

  1. Verify the failed release has the UNKNOWN or FAILED status in the HelmBundle object:

    kubectl --kubeconfig <regionalClusterKubeconfigPath> get helmbundle <clusterName> -n <clusterProjectName> -o=jsonpath={.status.releaseStatuses.stacklight}
    

    In the command above and in the steps below, replace the parameters enclosed in angle brackets with the corresponding values of your cluster.
    

    Example of system response:

    stacklight:
      attempt: 2
      chart: ""
      finishedAt: "2021-02-05T09:41:05Z"
      hash: e314df5061bd238ac5f060effdb55e5b47948a99460c02c2211ba7cb9aadd623
      message: '[{"occurrence":1,"lastOccurrenceDate":"2021-02-05 09:41:05","content":"error
        updating the release: rpc error: code = Unknown desc = customresourcedefinitions.apiextensions.k8s.io
        \"helmbundles.lcm.mirantis.com\" already exists"}]'
      notes: ""
      status: UNKNOWN
      success: false
      version: 0.1.2-mcp-398
    
  2. Log in to the helm-controller pod console:

    kubectl --kubeconfig <affectedClusterKubeconfigPath> exec -n kube-system -it helm-controller-0 sh -c tiller
    
  3. Download the Helm v3 binary. For details, see official Helm documentation.

  4. Remove the failed release:

    helm delete <failed-release-name>
    

    For example:

    helm delete stacklight
    

    Once done, the release is triggered for redeployment.



Upgrade
[13292] Local volume provisioner pod stuck in Terminating status after upgrade

After upgrade of Container Cloud from 2.6.0 to 2.7.0, the local volume provisioner pod in the default project is stuck in the Terminating status, even after upgrade to 2.8.0.

This issue does not affect functioning of the management, regional, or managed clusters. The issue does not prevent the successful upgrade of the cluster.

Workaround:

  1. Verify that the cluster is affected:

    kubectl get pods -n default | grep local-volume-provisioner
    

    If the output contains a pod with the Terminating status, the cluster is affected.

    Capture the affected pod name, if any.

  2. Delete the affected pod:

    kubectl -n default delete pod <LVPPodName> --force
    
[9899] Helm releases get stuck in PENDING_UPGRADE during cluster update

Fixed in 2.14.0

Helm releases may get stuck in the PENDING_UPGRADE status during a management or managed cluster upgrade. The HelmBundle Controller cannot recover from this state and requires manual actions. The workaround below describes the recovery process for the openstack-operator release that got stuck during a managed cluster update. Use it as an example for other Helm releases as required.

Workaround:

  1. Log in to the helm-controller pod console:

    kubectl exec -n kube-system -it helm-controller-0 sh -c tiller
    
  2. Identify the release that is stuck in the PENDING_UPGRADE status. For example:

    ./helm --host=localhost:44134 history openstack-operator
    

    Example of system response:

    REVISION  UPDATED                   STATUS           CHART                      DESCRIPTION
    1         Tue Dec 15 12:30:41 2020  SUPERSEDED       openstack-operator-0.3.9   Install complete
    2         Tue Dec 15 12:32:05 2020  SUPERSEDED       openstack-operator-0.3.9   Upgrade complete
    3         Tue Dec 15 16:24:47 2020  PENDING_UPGRADE  openstack-operator-0.3.18  Preparing upgrade
    
  3. Roll back the failed release to the previous revision:

    1. Download the Helm v3 binary. For details, see official Helm documentation.

    2. Roll back the failed release:

      helm rollback <failed-release-name> <revision>
      

      For example:

      helm rollback openstack-operator 2
      

    Once done, the release will be reconciled.



Container Cloud web UI
[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs due to the token missing the necessary role for the new project. As a workaround, relogin to the Container Cloud web UI.


Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.7.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.19.10

aws-credentials-controller

1.19.10

Bare metal

baremetal-operator Updated

4.0.7

baremetal-public-api Updated

4.0.7

baremetal-provider Updated

1.19.10

httpd

1.18.0

ironic

victoria-bionic-20210302180018

ironic-operator Updated

base-bionic-20210326130922

kaas-ipam Updated

base-bionic-20210329201651

local-volume-provisioner Updated

1.0.5-mcp

mariadb

10.4.17-bionic-20210203155435

IAM

iam Updated

2.2.0

iam-controller Updated

1.19.10

keycloak

9.0.0

Container Cloud

admission-controller Updated

1.19.10

byo-credentials-controller Updated

1.19.10

byo-provider Updated

1.19.10

kaas-public-api Updated

1.19.10

kaas-exporter Updated

1.19.10

kaas-ui Updated

1.19.10

lcm-controller Updated

0.2.0-299-g32c0398a

mcc-cache Updated

1.19.10

proxy-controller Updated

1.19.10

release-controller Updated

1.19.10

rhellicense-controller Updated

1.19.10

squid-proxy

0.0.1-1

OpenStack Updated

openstack-provider

1.19.10

os-credentials-controller

1.19.10

VMware vSphere Updated

vsphere-provider

1.19.10

vsphere-credentials-controller

1.19.10

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.7.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-4.0.7.tgz

baremetal-public-api Updated

https://binary.mirantis.com/bm/helm/baremetal-public-api-4.0.7.tgz

ironic-python-agent-bionic.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-victoria-bionic-debug-20210226182519

ironic-python-agent-bionic.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-victoria-bionic-debug-20210226182519

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-4.0.7.tgz

local-volume-provisioner Updated

https://binary.mirantis.com/bm/helm/local-volume-provisioner-1.0.5-mcp.tgz

Docker images

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-bionic-20210317164614

httpd

mirantis.azurecr.io/lcm/nginx:1.18.0

ironic

mirantis.azurecr.io/openstack/ironic:victoria-bionic-20210302180018

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:victoria-bionic-20210302180018

ironic-operator

mirantis.azurecr.io/bm/ironic-operator:base-bionic-20210301104323

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-bionic-20210329201651

mariadb

mirantis.azurecr.io/general/mariadb:10.4.17-bionic-20210203155435


Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.19.10.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.19.10.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.19.10.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.19.10.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.19.10.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.19.10.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.19.10.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.19.10.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.19.10.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.19.10.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.19.10.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.19.10.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.19.10.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.19.10.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.19.10.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.19.10.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.19.10.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.19.10.tgz

rhellicense-controller Updated

https://binary.mirantis.com/core/helm/rhellicense-controller-1.19.10.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.19.10.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.19.10.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.19.10.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.19.10

aws-cluster-api-controller Updated

mirantis.azurecr.io/core/aws-cluster-api-controller:1.19.10

aws-credentials-controller Updated

mirantis.azurecr.io/core/aws-credentials-controller:1.19.10

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.19.10

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.19.10

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.19.10

frontend Updated

mirantis.azurecr.io/core/frontend:1.19.10

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.19.10

kproxy Updated

mirantis.azurecr.io/lcm/kproxy:1.19.10

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:v0.2.0-299-g32c0398a

nginx

mirantis.azurecr.io/lcm/nginx:1.18.0

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.19.10

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.19.10

registry

mirantis.azurecr.io/lcm/registry:2.7.1

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.19.10

rhellicense-controller Updated

mirantis.azurecr.io/core/rhellicense-controller:1.19.10

squid-proxy

mirantis.azurecr.io/core/squid-proxy:0.0.1-1

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-api-controller:1.19.10

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.19.10


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

iamctl-linux

http://binary.mirantis.com/iam/bin/iamctl-0.4.0-linux

iamctl-darwin

http://binary.mirantis.com/iam/bin/iamctl-0.4.0-darwin

iamctl-windows

http://binary.mirantis.com/iam/bin/iamctl-0.4.0-windows

Helm charts Updated

iam

http://binary.mirantis.com/iam/helm/iam-2.2.0.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

keycloak-proxy Updated

http://binary.mirantis.com/core/helm/keycloak_proxy-1.19.10.tgz

Docker images

api

mirantis.azurecr.io/iam/api:0.4.0

auxiliary

mirantis.azurecr.io/iam/auxiliary:0.4.0

kubernetes-entrypoint

mirantis.azurecr.io/iam/external/kubernetes-entrypoint:v0.3.1

mariadb

mirantis.azurecr.io/general/mariadb:10.4.16-bionic-20201105025052

keycloak

mirantis.azurecr.io/iam/keycloak:0.4.0

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:6.0.1

2.6.0

The Mirantis Container Cloud GA release 2.6.0:

  • Introduces support for the Cluster release 5.13.0 that is based on Kubernetes 1.18, Mirantis Container Runtime 19.03.14, and Mirantis Kubernetes Engine 3.3.6.

  • Supports the Cluster release 6.12.0 that is based on the Cluster release 5.12.0 and represents Mirantis OpenStack for Kubernetes (MOS) 21.1.

  • Still supports deprecated Cluster releases 5.12.0 and 6.10.0 that will become unsupported in one of the following Container Cloud releases.

  • Supports the Cluster release 5.11.0 only for attachment of existing MKE 3.3.4 clusters. For the deployment of new or attachment of existing MKE 3.3.6 clusters, the latest available Cluster release is used.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.6.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.6.0. For the list of enhancements in the Cluster release 5.13.0 and Cluster release 6.12.0 that are supported by the Container Cloud release 2.6.0, see the 5.13.0 and 6.12.0 sections.


RHEL license activation using the activation key

Technology Preview

In the scope of Technology Preview support for the VMware vSphere cloud provider on RHEL, added an additional RHEL license activation method that uses an activation key through the Red Hat Customer Portal or a Red Hat Satellite server.

The Satellite configuration on the hosts is done by installing a specific pre-generated RPM package from the Satellite package URL that the user provides through the API. The activation key is also provided by the user through the API.

Along with the new activation method, you can still use the existing one: adding your RHEL subscription with the user name and password of the Red Hat Customer Portal account associated with your RHEL license for Virtual Datacenters.

Support for VMware vSphere Distributed Switch

Technology Preview

In the scope of Technology Preview support for the VMware vSphere cloud provider on RHEL, added support for VMware vSphere Distributed Switch (VDS) to provide networking to the vSphere virtual machines. This is an alternative to the vSphere Standard Switch with network on top of it. A VM is attached to a VDS port group. You can specify the path to the port group using the NetworkPath parameter in VsphereClusterProviderSpec.

VMware vSphere provider integration with IPAM controller

Technology Preview

In the scope of Technology Preview support for the VMware vSphere cloud provider on RHEL, enabled the vSphere provider to use the IPAM controller to assign IP addresses to VMs automatically, without an external DHCP server. If the IPAM controller is not enabled in the bootstrap template, the vSphere provider relies on external provisioning of IP addresses by a DHCP server of the user infrastructure.

Proxy support for all Container Cloud providers

Extended proxy support by enabling the feature for the remaining supported AWS and bare metal cloud providers. If you require all Internet access to go through a proxy server for security and audit purposes, you can now bootstrap management and regional clusters of any cloud provider type using a proxy.
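
For example, a minimal sketch of preparing a bootstrap node to work through a proxy, assuming the standard HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables (refer to the Deployment Guide for the exact provider-specific proxy parameters):

export HTTP_PROXY=http://<proxyAddress>:<proxyPort>
export HTTPS_PROXY=http://<proxyAddress>:<proxyPort>
export NO_PROXY=<comma-separated list of internal addresses and domains>
# Then launch the bootstrap script as described in the Deployment Guide.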

You can also enable a separate proxy access on the OpenStack-based managed clusters using the Container Cloud web UI. This proxy is intended for the end user needs and is not used for a managed cluster deployment or for access to the Mirantis resources.

Caution

Enabling proxy access using the Container Cloud web UI for the vSphere, AWS, and baremetal-based managed clusters is in the final development stage and will become available in the next release.

Updated documentation on the bare metal networking

Expanded and restructured the bare metal networking documentation that now contains the following subsections with a detailed description of every bare metal network type:

  • IPAM network

  • Management network

  • Cluster network

  • Host network

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.6.0 and the Cluster release 5.13.0:

  • [11302] [LCM] Fixed the issue with inability to delete a Container Cloud project with attached MKE clusters that failed to be cleaned up properly.

  • [11967] [LCM] Added vrrp_script chk_myscript to the Keepalived configuration to prevent issues with VIP (Virtual IP) pointing to a node with broken Kubernetes API.

  • [10491] [LCM] Fixed the issue with kubelet being randomly stuck, for example, after a management cluster upgrade. The fix enables automatic restart of kubelet in case of failures.

  • [7782] [bootstrap] Renamed the SSH key used during bootstrap for every cloud provider from openstack_tmp to an accurate and clear ssh_key.

  • [11927] [StackLight] Fixed the issue with StackLight failing to integrate with an external proxy with authentication handled by a proxy server and ignoring the HTTP Authorization header for basic authentication passed by Prometheus Alertmanager.

  • [11001] [StackLight] Fixed the issue with Patroni pod failing to start and remaining in the CrashLoopBackOff status after the management cluster update.

  • [10829] [IAM] Fixed the issue with the Keycloak pods failing to start during a management cluster bootstrap with the Failed to update database exception in logs.

  • [11468] [BM] Fixed the issue with the persistent volumes (PVs) that are created using local volume provisioner (LVP) not being mounted on the dedicated disk labeled as local-volume and using the root volume instead.

  • [9875] [BM] Fixed the issue with the bootstrap.sh preflight script failing with a timeout waiting for BareMetalHost if KAAS_BM_FULL_PREFLIGHT was enabled.

  • [11633] [vSphere] Fixed the issue with the vSphere-based managed cluster projects failing to be cleaned up because of stale secret(s) related to the RHEL license object(s).

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.6.0 including the Cluster releases 5.13.0 and 6.12.0.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


AWS
[8013] Managed cluster deployment requiring PVs may fail

Fixed in the Cluster release 7.0.0

Note

The issue below affects only the Kubernetes 1.18 deployments. Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

On a management cluster with multiple AWS-based managed clusters, some clusters fail to complete the deployments that require persistent volumes (PVs), for example, Elasticsearch. Some of the affected pods get stuck in the Pending state with the pod has unbound immediate PersistentVolumeClaims and node(s) had volume node affinity conflict errors.

Warning

The workaround below applies to HA deployments where data can be rebuilt from replicas. If you have a non-HA deployment, back up any existing data before proceeding, since all data will be lost while applying the workaround.

Workaround:

  1. Obtain the persistent volume claims related to the storage mounts of the affected pods:

    kubectl get pod/<pod_name1> pod/<pod_name2> \
    -o jsonpath='{.spec.volumes[?(@.persistentVolumeClaim)].persistentVolumeClaim.claimName}'
    

    Note

    In the command above and in the subsequent steps, substitute the parameters enclosed in angle brackets with the corresponding values.

  2. Delete the affected Pods and PersistentVolumeClaims to reschedule them. For example, for StackLight:

    kubectl -n stacklight delete \
      pod/<pod_name1> pod/<pod_name2> ... \
      pvc/<pvc_name1> pvc/<pvc_name2> ...
    


vSphere
[12683] The kaas-ipam pods restart on the vSphere region with IPAM disabled

Fixed in Container Cloud 2.7.0

Even though IPAM is disabled on the vSphere-based regional cluster deployed on top of an AWS-based management cluster, the regional cluster still has the kaas-ipam pods installed and continuously restarts them. In this case, the pod logs contain errors similar to the following:

Waiting for CRDs. [baremetalhosts.metal3.io clusters.cluster.k8s.io machines.cluster.k8s.io
ipamhosts.ipam.mirantis.com ipaddrs.ipam.mirantis.com subnets.ipam.mirantis.com subnetpools.ipam.mirantis.com \
l2templates.ipam.mirantis.com] not found yet
E0318 11:58:21.067502  1 main.go:240] Fetch CRD list failed: \
Object 'Kind' is missing in 'unstructured object has no kind'

As a result, the KubePodCrashLooping StackLight alerts are firing in Alertmanager for kaas-ipam. Disregard these alerts.
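
To confirm that the restarts are caused by this issue, you can check the restart count and the recent logs of the kaas-ipam pods, for example (substitute the namespace and pod name reported on your cluster):

kubectl get pods --all-namespaces | grep kaas-ipam
kubectl -n <namespace> logs <kaas-ipam-pod-name> --tail=20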

[13176] ClusterNetwork settings may disappear from the cluster provider spec

Fixed in Container Cloud 2.7.0

A vSphere-based cluster with IPAM enabled may lose the cluster network settings related to IPAM, leading to invalid metadata being provided to virtual machines. As a result, virtual machines cannot obtain the assigned IP addresses. The issue occurs during a management cluster bootstrap or a managed cluster creation.

Workaround:

  • If the management cluster with IPAM enabled is not deployed yet, follow the steps below before launching the bootstrap.sh script:

    1. Open kaas-bootstrap/releases/kaas/2.6.0.yaml for editing.

    2. Change the release-controller version from 1.18.1 to 1.18.3:

      - name: release-controller
        version: 1.18.3
        chart: kaas-release/release-controller
        namespace: kaas
        values:
          image:
            tag: 1.18.3
      

    Now, proceed with the management cluster bootstrap.

  • If the management cluster is already deployed, and you want to create a vSphere-based managed cluster with IPAM enabled:

    1. Log in to a local machine where your management or regional cluster kubeconfig is located and export it:

      export KUBECONFIG=kaas-bootstrap/kubeconfig
      
    2. Edit the kaasrelease object by updating the release-controller chart and image version from 1.18.1 to 1.18.3:

      kubectl edit kaasrelease kaas-2-6-0
      
      - chart: kaas-release/release-controller
        name: release-controller
        namespace: kaas
        values:
          image:
            tag: 1.18.3
        version: 1.18.3
      
    3. Verify that the release-controller deployment is ready with 3/3 replicas:

      kubectl get deployment release-controller-release-controller -n kaas -o=jsonpath='{.status.readyReplicas}/{.status.replicas}'
      

    Now, you can deploy managed clusters with IPAM enabled.


Bare metal
[7655] Wrong status for an incorrectly configured L2 template

Fixed in 2.11.0

If an L2 template is configured incorrectly, a bare metal cluster is deployed successfully but with the runtime errors in the IpamHost object.

Workaround:

If you suspect that the machine is not working properly because of incorrect network configuration, verify the status of the corresponding IpamHost object. Inspect the l2RenderResult and ipAllocationResult object fields for error messages.
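
For example, you can inspect these fields directly (a sketch, assuming access to the management cluster kubeconfig; substitute the host and project names):

kubectl --kubeconfig <mgmtClusterKubeconfigPath> get ipamhosts.ipam.mirantis.com <hostName> -n <projectName> -o yaml | grep -A 5 -E 'l2RenderResult|ipAllocationResult'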



StackLight
[13078] Elasticsearch does not receive data from Fluentd

Fixed in Container Cloud 2.7.0

Elasticsearch may stop receiving new data from Fluentd. In this case, error messages similar to the following are present in the fluentd-elasticsearch logs:

ElasticsearchError error="400 - Rejected by Elasticsearch [error type]:
illegal_argument_exception [reason]: 'Validation Failed: 1: this action would
add [15] total shards, but this cluster currently has [2989]/[3000] maximum
shards open;'" location=nil tag="ucp-kubelet"

The workaround is to manually increase the limit of open index shards per node:

kubectl -n stacklight exec -ti elasticsearch-master-0 -- \
curl -XPUT -H "content-type: application/json" \
-d '{"persistent":{"cluster.max_shards_per_node": 20000}}' \
http://127.0.0.1:9200/_cluster/settings

Storage
[10060] Ceph OSD node removal fails

Fixed in Container Cloud 2.7.0

A Ceph node removal is not being triggered properly after updating the KaasCephCluster custom resource (CR). Both management and managed clusters are affected.

Workaround:

  1. Remove the parameters for a Ceph OSD from the KaasCephCluster CR as described in Operations Guide: Add, remove, or reconfigure Ceph nodes.

  2. Obtain the IDs of the osd and mon services that are located on the old node:

    1. Obtain the UID of the affected machine:

      kubectl get machine <CephOSDNodeName> -n <ManagedClusterProjectName> -o jsonpath='{.metadata.annotations.kaas\.mirantis\.com\/uid}'
      
    2. Export kubeconfig of your managed cluster. For example:

      export KUBECONFIG=~/Downloads/kubeconfig-test-cluster.yml
      
    3. Identify the IDs of the pods that run the osd and mon services:

      kubectl get pods -o wide -n rook-ceph | grep <affectedMachineUID> | grep -E "mon|osd"
      

      Example of the system response extract:

      rook-ceph-mon-c-7bbc5d757d-5bpws                              1/1  Running    1  6h1m
      rook-ceph-osd-2-58775d5568-5lklw                              1/1  Running    4  44h
      rook-ceph-osd-prepare-705ae6c647cfdac928c63b63e2e2e647-qn4m9  0/1  Completed  0  94s
      

      The pod IDs include the osd or mon service IDs. In the example system response above, the osd ID is 2 and the mon ID is c.

  3. Delete the deployments of the osd and mon services obtained in the previous step:

    kubectl delete deployment rook-ceph-osd(mon)-<ID> -n rook-ceph
    

    For example:

    kubectl delete deployment rook-ceph-mon-c -n rook-ceph
    kubectl delete deployment rook-ceph-osd-2 -n rook-ceph
    
  4. Log in to the ceph-tools pod:

    kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
    
  5. Rebalance the Ceph OSDs:

    ceph osd out osd(s).ID
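    # For example, using the osd ID 2 from the example system response above (an illustrative value):
    ceph osd out osd.2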
    

    Wait for the rebalance to complete.

  6. Rebalance the Ceph data:

    ceph osd purge osd(s).ID
    

    Wait for the Ceph data to rebalance.

  7. Remove the old node from the Ceph OSD tree:

    ceph osd crush rm <NodeName>
    
  8. If the removed node contained mon services, remove them:

    ceph mon rm <monID>
    
[7073] Cannot automatically remove a Ceph node

Fixed in 2.16.0

When removing a worker node, it is not possible to automatically remove a Ceph node. The workaround is to manually remove the Ceph node from the Ceph cluster as described in Operations Guide: Add, remove, or reconfigure Ceph nodes before removing the worker node from your deployment.

[10050] Ceph OSD pod is in the CrashLoopBackOff state after disk replacement

Fixed in 2.11.0

If you use a custom BareMetalHostProfile, after disk replacement on a Ceph OSD, the Ceph OSD pod switches to the CrashLoopBackOff state due to the Ceph OSD authorization key failing to be created properly.

Workaround:

  1. Export kubeconfig of your managed cluster. For example:

    export KUBECONFIG=~/Downloads/kubeconfig-test-cluster.yml
    
  2. Log in to the ceph-tools pod:

    kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
    
  3. Delete the authorization key for the failed Ceph OSD:

    ceph auth del osd.<ID>
    
  4. SSH to the node on which the Ceph OSD cannot be created.

  5. Clean up the disk that will be a base for the failed Ceph OSD. For details, see official Rook documentation.

    Note

    Ignore failures of the sgdisk --zap-all $DISK and blkdiscard $DISK commands if any.

  6. On the managed cluster, restart Rook Operator:

    kubectl -n rook-ceph delete pod -l app=rook-ceph-operator
    

[12723] ceph_role_* labels remain after deleting a node from KaaSCephCluster

Fixed in 2.8.0

The ceph_role_mon and ceph_role_mgr labels that Ceph Controller assigns to a node during a Ceph cluster creation are not automatically removed after deleting a node from KaaSCephCluster.

As a workaround, manually remove the labels using the following commands:

kubectl label node <nodeName> ceph_role_mon-
kubectl label node <nodeName> ceph_role_mgr-

LCM
[13402] Cluster fails with error: no space left on device

Fixed in 2.8.0 for new clusters and in 2.10.0 for existing clusters

If an application running on a Container Cloud management or managed cluster, for example, PostgreSQL, fails frequently, it may produce an excessive amount of core dumps. This leads to the no space left on device error on the cluster nodes and, as a result, breaks Docker Swarm and, eventually, the entire cluster.

Core dumps are disabled by default on the operating system of the Container Cloud nodes. But since Docker does not inherit the operating system settings, disable core dumps in Docker using the workaround below.

Warning

The workaround below does not apply to the baremetal-based clusters, including MOS deployments, since Docker restart may destroy the Ceph cluster.

Workaround:

  1. SSH to any machine of the affected cluster using mcc-user and the SSH key provided during the cluster creation.

  2. In /etc/docker/daemon.json, add the following parameters:

    {
        ...
        "default-ulimits": {
            "core": {
                "Hard": 0,
                "Name": "core",
                "Soft": 0
            }
        }
    }
    
  3. Restart the Docker daemon:

    systemctl restart docker
    
  4. Repeat the steps above on each machine of the affected cluster one by one.


[10029] Authentication fails with the 401 Unauthorized error

Authentication may not work on some controller nodes after a managed cluster creation. As a result, the Kubernetes API operations with the managed cluster kubeconfig fail with Response Status: 401 Unauthorized.

As a workaround, manually restart the ucp-controller and ucp-auth Docker services on the affected node.
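
For example, on the affected node (a sketch; the exact container names may vary depending on the MKE version):

docker ps --format '{{.Names}}' | grep -E 'ucp-controller|ucp-auth' | xargs docker restart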

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

[6066] Helm releases get stuck in FAILED or UNKNOWN state

Note

The issue affects only Helm v2 releases and is addressed for Helm v3. Starting from Container Cloud 2.19.0, all Helm releases are switched to v3.

During a management, regional, or managed cluster deployment, Helm releases may get stuck in the FAILED or UNKNOWN state although the corresponding machine statuses are Ready in the Container Cloud web UI. For example, if the StackLight Helm release fails, the links to its endpoints are grayed out in the web UI. In the cluster status, providerStatus.helm.ready and providerStatus.helm.releaseStatuses.<releaseName>.success are false.

HelmBundle cannot recover from such states and requires manual actions. The workaround below describes the recovery steps for the stacklight release that got stuck during a cluster deployment. Use this procedure as an example for other Helm releases as required.

Workaround:

  1. Verify the failed release has the UNKNOWN or FAILED status in the HelmBundle object:

    kubectl --kubeconfig <regionalClusterKubeconfigPath> get helmbundle <clusterName> -n <clusterProjectName> -o=jsonpath={.status.releaseStatuses.stacklight}
    

    In the command above and in the steps below, replace the parameters enclosed in angle brackets with the corresponding values of your cluster.

    Example of system response:

    stacklight:
      attempt: 2
      chart: ""
      finishedAt: "2021-02-05T09:41:05Z"
      hash: e314df5061bd238ac5f060effdb55e5b47948a99460c02c2211ba7cb9aadd623
      message: '[{"occurrence":1,"lastOccurrenceDate":"2021-02-05 09:41:05","content":"error
        updating the release: rpc error: code = Unknown desc = customresourcedefinitions.apiextensions.k8s.io
        \"helmbundles.lcm.mirantis.com\" already exists"}]'
      notes: ""
      status: UNKNOWN
      success: false
      version: 0.1.2-mcp-398
    
  2. Log in to the helm-controller pod console:

    kubectl --kubeconfig <affectedClusterKubeconfigPath> exec -n kube-system -it helm-controller-0 sh -c tiller
    
  3. Download the Helm v3 binary. For details, see official Helm documentation.

  4. Remove the failed release:

    helm delete <failed-release-name>
    

    For example:

    helm delete stacklight
    

    Once done, the release is triggered for redeployment.
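
    To confirm the redeployment, you can re-run the verification command from the first step and check that the stacklight release status eventually changes:

    kubectl --kubeconfig <regionalClusterKubeconfigPath> get helmbundle <clusterName> -n <clusterProjectName> -o=jsonpath={.status.releaseStatuses.stacklight}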



Management and regional clusters
[9899] Helm releases get stuck in PENDING_UPGRADE during cluster update

Fixed in 2.14.0

Helm releases may get stuck in the PENDING_UPGRADE status during a management or managed cluster upgrade. The HelmBundle Controller cannot recover from this state and requires manual actions. The workaround below describes the recovery process for the openstack-operator release that got stuck during a managed cluster update. Use it as an example for other Helm releases as required.

Workaround:

  1. Log in to the helm-controller pod console:

    kubectl exec -n kube-system -it helm-controller-0 sh -c tiller
    
  2. Identify the release that is stuck in the PENDING_UPGRADE status. For example:

    ./helm --host=localhost:44134 history openstack-operator
    

    Example of system response:

    REVISION  UPDATED                   STATUS           CHART                      DESCRIPTION
    1         Tue Dec 15 12:30:41 2020  SUPERSEDED       openstack-operator-0.3.9   Install complete
    2         Tue Dec 15 12:32:05 2020  SUPERSEDED       openstack-operator-0.3.9   Upgrade complete
    3         Tue Dec 15 16:24:47 2020  PENDING_UPGRADE  openstack-operator-0.3.18  Preparing upgrade
    
  3. Roll back the failed release to the previous revision:

    1. Download the Helm v3 binary. For details, see official Helm documentation.

    2. Roll back the failed release:

      helm rollback <failed-release-name>
      

      For example:

      helm rollback openstack-operator 2
      

    Once done, the release will be reconciled.


[10424] Regional cluster cleanup fails by timeout

An OpenStack-based regional cluster cleanup fails with the timeout error.

Workaround:

  1. Wait for the Cluster object to be deleted in the bootstrap cluster:

    kubectl --kubeconfig <(./bin/kind get kubeconfig --name clusterapi) get cluster
    

    The system output must be empty.

  2. Remove the bootstrap cluster manually:

    ./bin/kind delete cluster --name clusterapi
    


Container Cloud web UI
[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs due to the token missing the necessary role for the new project. As a workaround, relogin to the Container Cloud web UI.


Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.6.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.18.4

aws-credentials-controller

1.18.1

Bare metal

baremetal-operator Updated

4.0.4

baremetal-public-api Updated

4.0.4

baremetal-provider Updated

1.18.6

httpd

1.18.0

ironic Updated

victoria-bionic-20210302180018

ironic-operator Updated

base-bionic-20210301104323

kaas-ipam Updated

base-bionic-20210304134548

local-volume-provisioner

1.0.4-mcp

mariadb

10.4.17-bionic-20210203155435

IAM

iam Updated

2.0.0

iam-controller Updated

1.18.1

keycloak

9.0.0

Container Cloud

admission-controller Updated

1.18.1

byo-credentials-controller Updated

1.18.1

byo-provider Updated

1.18.4

kaas-public-api Updated

1.18.1

kaas-exporter Updated

1.18.1

kaas-ui Updated

1.18.3

lcm-controller Updated

0.2.0-289-gd7e9fa9c

mcc-cache Updated

1.18.1

proxy-controller Updated

1.18.1

release-controller Updated

1.18.1

rhellicense-controller New

1.18.1

squid-proxy

0.0.1-1

OpenStack Updated

openstack-provider

1.18.4

os-credentials-controller

1.18.1

VMware vSphere Updated

vsphere-provider

1.18.7

vsphere-credentials-controller

1.18.1

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.6.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-4.0.4.tgz

baremetal-public-api Updated

https://binary.mirantis.com/bm/helm/baremetal-public-api-4.0.4.tgz

ironic-python-agent-bionic.kernel Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-victoria-bionic-debug-20210226182519

ironic-python-agent-bionic.initramfs Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-victoria-bionic-debug-20210226182519

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-4.0.4.tgz

local-volume-provisioner

https://binary.mirantis.com/bm/helm/local-volume-provisioner-1.0.4-mcp.tgz

Docker images

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-bionic-20210216135743

httpd

mirantis.azurecr.io/lcm/nginx:1.18.0

ironic Updated

mirantis.azurecr.io/openstack/ironic:victoria-bionic-20210302180018

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:victoria-bionic-20210302180018

ironic-operator Updated

mirantis.azurecr.io/bm/ironic-operator:base-bionic-20210301104323

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-bionic-20210304134548

mariadb

mirantis.azurecr.io/general/mariadb:10.4.17-bionic-20210203155435


Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.18.6.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.18.6.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.18.1.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.18.1.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.18.4.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.18.6.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.18.1.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.18.4.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.18.1.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.18.1.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.18.1.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.18.3.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.18.1.tgz

mcc-cache

https://binary.mirantis.com/core/helm/mcc-cache-1.18.1.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.18.4.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.18.1.tgz

proxy-controller

https://binary.mirantis.com/core/helm/proxy-controller-1.18.1.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.18.1.tgz

rhellicense-controller New

https://binary.mirantis.com/core/helm/rhellicense-controller-1.18.1.tgz

squid-proxy

https://binary.mirantis.com/core/helm/squid-proxy-1.18.1.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.18.1.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.18.7.tgz

Docker images

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.18.1

aws-cluster-api-controller Updated

mirantis.azurecr.io/core/aws-cluster-api-controller:1.18.4

aws-credentials-controller Updated

mirantis.azurecr.io/core/aws-credentials-controller:1.18.1

byo-cluster-api-controller Updated

mirantis.azurecr.io/core/byo-cluster-api-controller:1.18.4

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.18.1

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.18.6

frontend Updated

mirantis.azurecr.io/core/frontend:1.18.3

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.18.1

kproxy Updated

mirantis.azurecr.io/lcm/kproxy:1.18.1

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:v0.2.0-289-gd7e9fa9c

nginx

mirantis.azurecr.io/lcm/nginx:1.18.0

openstack-cluster-api-controller Updated

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.18.4

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.18.1

registry

mirantis.azurecr.io/lcm/registry:2.7.1

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.18.1

rhellicense-controller New

mirantis.azurecr.io/core/rhellicense-controller:1.18.1

squid-proxy

mirantis.azurecr.io/core/squid-proxy:0.0.1-1

vsphere-cluster-api-controller Updated

mirantis.azurecr.io/core/vsphere-api-controller:1.18.7

vsphere-credentials-controller Updated

mirantis.azurecr.io/core/vsphere-credentials-controller:1.18.1


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

iamctl-linux Updated

http://binary.mirantis.com/iam/bin/iamctl-0.4.0-linux

iamctl-darwin Updated

http://binary.mirantis.com/iam/bin/iamctl-0.4.0-darwin

iamctl-windows Updated

http://binary.mirantis.com/iam/bin/iamctl-0.4.0-windows

Helm charts Updated

iam

http://binary.mirantis.com/iam/helm/iam-2.0.0.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

keycloak-proxy Updated

http://binary.mirantis.com/core/helm/keycloak_proxy-1.18.7.tgz

Docker images

api

mirantis.azurecr.io/iam/api:0.4.0

auxiliary

mirantis.azurecr.io/iam/auxiliary:0.4.0

kubernetes-entrypoint

mirantis.azurecr.io/iam/external/kubernetes-entrypoint:v0.3.1

mariadb

mirantis.azurecr.io/general/mariadb:10.4.16-bionic-20201105025052

keycloak

mirantis.azurecr.io/iam/keycloak:0.4.0

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:6.0.1

2.5.0

The Mirantis Container Cloud GA release 2.5.0:

  • Introduces support for the Cluster release 5.12.0 that is based on Kubernetes 1.18, Mirantis Container Runtime 19.03.14, and the updated version of Mirantis Kubernetes Engine 3.3.6.

  • Introduces support for the Cluster release 6.12.0 that is based on the Cluster release 5.12.0 and supports Mirantis OpenStack for Kubernetes (MOS) 21.1.

  • Still supports previous Cluster releases 5.11.0 and 6.10.0 that are now deprecated and will become unsupported in one of the following Container Cloud releases.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.5.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.5.0. For the list of enhancements in the Cluster release 5.12.0 and Cluster release 6.12.0 that are supported by the Container Cloud release 2.5.0, see the 5.12.0 and 6.12.0 sections.


Updated version of Mirantis Kubernetes Engine

Updated the Mirantis Kubernetes Engine (MKE) version to 3.3.6 for the Container Cloud management and managed clusters.

For the MKE release highlights and components versions, see MKE documentation: MKE release notes.

Proxy support for OpenStack and VMware vSphere providers

Implemented proxy support for OpenStack-based and vSphere-based Technology Preview clusters. If you require all Internet access to go through a proxy server for security and audit purposes, you can now bootstrap management and regional clusters using a proxy.

You can also enable a separate proxy access on an OpenStack-based managed cluster using the Container Cloud web UI. This proxy is intended for the end user needs and is not used for a managed cluster deployment or for access to the Mirantis resources.

Note

The proxy support for:

  • The OpenStack provider is generally available.

  • The VMware vSphere provider is available as Technology Preview. For the Technology Preview feature definition, refer to Technology Preview features.

  • The AWS and bare metal providers is in the development stage and will become available in the future Container Cloud releases.

Artifacts caching

Introduced artifacts caching support for all Container Cloud providers to enable deployment of managed clusters without direct Internet access. The Mirantis artifacts used during managed clusters deployment are downloaded through a cache running on a regional cluster.

The feature is enabled by default on new managed clusters based on the Cluster releases 5.12.0 and 6.12.0 and will be automatically enabled on existing clusters during upgrade to the latest version.

NTP server configuration on regional clusters

Implemented the possibility to configure regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region. The feature is applicable to all supported cloud providers. The NTP server parameters can be added before or after management and regional clusters deployment.

Optimized ClusterRelease upgrade process

Optimized the ClusterRelease upgrade process by enabling the Container Cloud provider to upgrade the LCMCluster components, such as MKE, before the HelmBundle components, such as StackLight or Ceph.

Dedicated network for external connection to the Kubernetes services

Technology Preview

Implemented the k8s-ext bridge in L2 templates that allows you to use a dedicated network for external connection to the Kubernetes services exposed by the cluster. When using such a bridge, the MetalLB ranges and the IP addresses provided by the subnet that is associated with the bridge must fit in the same CIDR.

If enabled, MetalLB will listen and respond on the dedicated virtual bridge. Also, you can create additional subnets to configure additional address ranges for MetalLB.

Caution

Use of a dedicated network for Kubernetes pods traffic, for external connection to the Kubernetes services exposed by the cluster, and for the Ceph cluster access and replication traffic is available as Technology Preview. Use such configurations for testing and evaluation purposes only. For the Technology Preview feature definition, refer to Technology Preview features.

Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.5.0 and the Cluster releases 5.12.0 and 6.12.0:

  • [10453] [LCM] Fixed the issue with time synchronization on nodes that could cause networking issues.

  • [9748] [LCM] Fixed the issue with the false-positive helmRelease success status in HelmBundle during Helm upgrade operations.


  • [8464] Fixed the issue with Helm controller and OIDC integration failing to be deleted during detach of an MKE cluster.


  • [9928] [Ceph] Fixed the issue with Ceph rebalance leading to data loss during a managed cluster update by implementing the maintenance label to be set before and unset after the cluster update.

  • [9892] [Ceph] Fixed the issue with Ceph being locked during a managed cluster update by adding the PodDisruptionBudget object that ensures a minimum of 2 Ceph OSD nodes keep running without rescheduling during the update.


  • [6988] [BM] Fixed the issue with LVM failing to deploy on a new disk if an old volume group with the same name already existed on the target hardware node but on a different disk.

  • [8560] [BM] Fixed the issue with manual deletion of BareMetalHost from a managed cluster leading to its silent removal without a power-off and deprovision. The fix adds the admission controller webhook to validate the old BareMetalHost when the deletion is requested.

  • [11102] [BM] Fixed the issue with Keepalived not detecting and restoring a VIP of a managed cluster node after running the netplan apply command.

  • [9905] [9906] [9909] [9914] [9921] [BM] Fixed the following Ubuntu CVEs in the bare metal Docker images:

    • CVE-2019-20477 and CVE-2020-1747 for PyYAML in vbmc:latest-20201029

    • CVE-2020-1971 for OpenSSL in the following images:

      • dnsmasq:bionic-20201105044831

      • rabbitmq-management:3.7.15-bionic-20200812044813

      • kaas-ipam:base-bionic-20201208153852

      • ironic-operator:base-bionic-20201106182102

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.5.0 including the Cluster releases 5.12.0 and 6.12.0.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


AWS
[8013] Managed cluster deployment requiring PVs may fail

Fixed in the Cluster release 7.0.0

Note

The issue below affects only the Kubernetes 1.18 deployments. Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

On a management cluster with multiple AWS-based managed clusters, some clusters fail to complete the deployments that require persistent volumes (PVs), for example, Elasticsearch. Some of the affected pods get stuck in the Pending state with the pod has unbound immediate PersistentVolumeClaims and node(s) had volume node affinity conflict errors.

Warning

The workaround below applies to HA deployments where data can be rebuilt from replicas. If you have a non-HA deployment, back up any existing data before proceeding, since all data will be lost while applying the workaround.

Workaround:

  1. Obtain the persistent volume claims related to the storage mounts of the affected pods:

    kubectl get pod/<pod_name1> pod/<pod_name2> \
    -o jsonpath='{.spec.volumes[?(@.persistentVolumeClaim)].persistentVolumeClaim.claimName}'
    

    Note

    In the command above and in the subsequent steps, substitute the parameters enclosed in angle brackets with the corresponding values.

  2. Delete the affected Pods and PersistentVolumeClaims to reschedule them. For example, for StackLight:

    kubectl -n stacklight delete \
      pod/<pod_name1> pod/<pod_name2> ... \
      pvc/<pvc_name1> pvc/<pvc_name2> ...
    


vSphere
[11633] A vSphere-based project cannot be cleaned up

Fixed in Container Cloud 2.6.0

A vSphere-based managed cluster project can fail to be cleaned up because of stale secret(s) related to the RHEL license object(s). Before you can successfully clean up such a project, manually delete the secrets using the steps below.

Workaround:

  1. Log in to a local machine where your management cluster kubeconfig is located and where kubectl is installed.

  2. Obtain the list of stale secrets:

    kubectl --kubeconfig <kubeconfigPath> get secrets -n <projectName>
    
  3. Open each secret for editing:

    kubectl --kubeconfig <kubeconfigPath> edit secret <secretName> -n <projectName>
    
  4. Remove the following lines:

    finalizers:
    - kaas.mirantis.com/credentials-secret
    
  5. Remove stale secrets:

    kubectl --kubeconfig <kubeconfigPath> delete secret <secretName> -n <projectName>
    

Bare metal
[7655] Wrong status for an incorrectly configured L2 template

Fixed in 2.11.0

If an L2 template is configured incorrectly, a bare metal cluster is deployed successfully but with the runtime errors in the IpamHost object.

Workaround:

If you suspect that the machine is not working properly because of incorrect network configuration, verify the status of the corresponding IpamHost object. Inspect the l2RenderResult and ipAllocationResult object fields for error messages.


[9875] Full preflight fails with a timeout waiting for BareMetalHost

Fixed in Container Cloud 2.6.0

If you run bootstrap.sh preflight with KAAS_BM_FULL_PREFLIGHT=true, the script fails with the following message:

failed to create BareMetal objects: failed to wait for objects of kinds BareMetalHost
to become available: timed out waiting for the condition

As a workaround, unset the full preflight mode using unset KAAS_BM_FULL_PREFLIGHT and run the fast preflight instead.
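
For example, from the directory that contains the bootstrap script:

unset KAAS_BM_FULL_PREFLIGHT
./bootstrap.sh preflight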

[11468] Pods using LVP PV are not mounted to LVP disk

Fixed in Container Cloud 2.6.0

The persistent volumes (PVs) that are created using the local volume provisioner (LVP) are not mounted on the dedicated disk labeled as local-volume and use the root volume instead. In the workaround below, we use StackLight volumes as an example.

Workaround:

  1. Identify whether your cluster is affected:

    1. Log in to any control plane node on the management cluster.

    2. Run the following command:

      findmnt /mnt/local-volumes/stacklight/elasticsearch-data/vol00
      

      In the output, inspect the SOURCE column. If the path starts with /dev/mapper/lvm_root-root, the host is affected by the issue.

      Example of system response:

      TARGET                                                 SOURCE                                                                                FSTYPE OPTIONS
      /mnt/local-volumes/stacklight/elasticsearch-data/vol00 /dev/mapper/lvm_root-root[/var/lib/local-volumes/stacklight/elasticsearch-data/vol00] ext4   rw,relatime,errors=remount-ro,data=ordered
      
    3. Verify other StackLight directories by replacing elasticsearch-data in the command above with the corresponding folder names.

      If your cluster is affected, follow the steps below to manually move all data for volumes that must be on the dedicated disk to the mounted device.

  2. Identify all nodes that run the elasticsearch-master pod:

    kubectl -n stacklight get pods -o wide | grep elasticsearch-master
    

    Apply the steps below to all nodes provided in the output.

  3. Identify the mount point for the dedicated device /dev/mapper/lvm_lvp-lvp. Typically, this device is mounted as /mnt/local-volumes.

    findmnt /mnt/local-volumes
    

    Verify that SOURCE for the /mnt/local-volumes mount target is /dev/mapper/lvm_lvp-lvp on all the nodes.

  4. Create new source directories for the volumes on the dedicated device /dev/mapper/lvm_lvp-lvp:

    mkdir -p /mnt/local-volumes/src/stacklight/elasticsearch-data/vol00
    
  5. Stop the pods that use the volumes to ensure that the data is not corrupted during the switch. Set the number of replicas in StatefulSet to 0:

    kubectl -n stacklight edit statefulset elasticsearch-master
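    # Alternatively (a sketch), you can scale the StatefulSet directly instead of editing it:
    # kubectl -n stacklight scale statefulset elasticsearch-master --replicas=0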
    

    Wait until all elasticsearch-master pods are stopped.

  6. Move the Elasticsearch data from the current location to the new directory:

    cp -pR /var/lib/local-volumes/stacklight/elasticsearch-data/vol00/** /mnt/local-volumes/src/stacklight/elasticsearch-data/vol00/
    
  7. Unmount the old source directory from the volume mount point:

    umount /mnt/local-volumes/stacklight/elasticsearch-data/vol00
    

    Apply this step and the next one to every node with the /mnt/local-volumes/stacklight/elasticsearch-data/vol00 volume.

  8. Remount the new source directory to the volume mount point:

    mount --bind /mnt/local-volumes/src/stacklight/elasticsearch-data/vol00 /mnt/local-volumes/stacklight/elasticsearch-data/vol00
    
  9. Edit the Cluster object by adding the clusterHealthCheckParams parameter shown below for the StackLight Helm chart:

    kubectl --kubeconfig <mgmtClusterKubeconfig> edit -n <projectName> cluster <managedClusterName>
    
    spec:
      helmReleases:
      - name: stacklight
        values:
          ...
          elasticsearch:
            clusterHealthCheckParams: wait_for_status=red&timeout=1s
    
  10. Start the Elasticsearch pods by setting the number of replicas in StatefulSet to 3:

    kubectl -n stacklight edit statefulset elasticsearch-master
    

    Wait until all elasticsearch-master pods are up and running.

  11. Remove the previously added clusterHealthCheckParams parameters from the Cluster object.

  12. In /etc/fstab on every node that has the volume /mnt/local-volumes/stacklight/elasticsearch-data/vol00, edit the following entry:

    /var/lib/local-volumes/stacklight/elasticsearch-data/vol00 /mnt/local-volumes/stacklight/elasticsearch-data/vol00 none bind 0 0
    

    In this entry, replace the old directory /var/lib/local-volumes/stacklight/elasticsearch-data/vol00 with the new one: /mnt/local-volumes/src/stacklight/elasticsearch-data/vol00.
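
    The resulting entry looks as follows:

    /mnt/local-volumes/src/stacklight/elasticsearch-data/vol00 /mnt/local-volumes/stacklight/elasticsearch-data/vol00 none bind 0 0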


Storage
[10060] Ceph OSD node removal fails

Fixed in Container Cloud 2.7.0

A Ceph node removal is not being triggered properly after updating the KaasCephCluster custom resource (CR). Both management and managed clusters are affected.

Workaround:

  1. Remove the parameters for a Ceph OSD from the KaasCephCluster CR as described in Operations Guide: Add, remove, or reconfigure Ceph nodes.

  2. Obtain the IDs of the osd and mon services that are located on the old node:

    1. Obtain the UID of the affected machine:

      kubectl get machine <CephOSDNodeName> -n <ManagedClusterProjectName> -o jsonpath='{.metadata.annotations.kaas\.mirantis\.com\/uid}'
      
    2. Export kubeconfig of your managed cluster. For example:

      export KUBECONFIG=~/Downloads/kubeconfig-test-cluster.yml
      
    3. Identify the IDs of the pods that run the osd and mon services:

      kubectl get pods -o wide -n rook-ceph | grep <affectedMachineUID> | grep -E "mon|osd"
      

      Example of the system response extract:

      rook-ceph-mon-c-7bbc5d757d-5bpws                              1/1  Running    1  6h1m
      rook-ceph-osd-2-58775d5568-5lklw                              1/1  Running    4  44h
      rook-ceph-osd-prepare-705ae6c647cfdac928c63b63e2e2e647-qn4m9  0/1  Completed  0  94s
      

      The pod IDs include the osd or mon service IDs. In the example system response above, the osd ID is 2 and the mon ID is c.

  3. Delete the deployments of the osd and mon services obtained in the previous step:

    kubectl delete deployment rook-ceph-osd(mon)-<ID> -n rook-ceph
    

    For example:

    kubectl delete deployment rook-ceph-mon-c -n rook-ceph
    kubectl delete deployment rook-ceph-osd-2 -n rook-ceph
    
  4. Log in to the ceph-tools pod:

    kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
    
  5. Rebalance the Ceph OSDs:

    ceph osd out osd(s).ID
    

    Wait for the rebalance to complete.

  6. Rebalance the Ceph data:

    ceph osd purge osd(s).ID
    

    Wait for the Ceph data to rebalance.

  7. Remove the old node from the Ceph OSD tree:

    ceph osd crush rm <NodeName>
    
  8. If the removed node contained mon services, remove them:

    ceph mon rm <monID>
    
[7073] Cannot automatically remove a Ceph node

Fixed in 2.16.0

When removing a worker node, it is not possible to automatically remove a Ceph node. The workaround is to manually remove the Ceph node from the Ceph cluster as described in Operations Guide: Add, remove, or reconfigure Ceph nodes before removing the worker node from your deployment.

[10050] Ceph OSD pod is in the CrashLoopBackOff state after disk replacement

Fixed in 2.11.0

If you use a custom BareMetalHostProfile, after disk replacement on a Ceph OSD, the Ceph OSD pod switches to the CrashLoopBackOff state due to the Ceph OSD authorization key failing to be created properly.

Workaround:

  1. Export kubeconfig of your managed cluster. For example:

    export KUBECONFIG=~/Downloads/kubeconfig-test-cluster.yml
    
  2. Log in to the ceph-tools pod:

    kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
    
  3. Delete the authorization key for the failed Ceph OSD:

    ceph auth del osd.<ID>
    
  4. SSH to the node on which the Ceph OSD cannot be created.

  5. Clean up the disk that will be a base for the failed Ceph OSD. For details, see official Rook documentation.

    Note

    Ignore failures of the sgdisk --zap-all $DISK and blkdiscard $DISK commands if any.

  6. On the managed cluster, restart Rook Operator:

    kubectl -n rook-ceph delete pod -l app=rook-ceph-operator
    


IAM
[10829] Keycloak pods fail to start during a management cluster bootstrap

Fixed in Container Cloud 2.6.0

The Keycloak pods may fail to start during a management cluster bootstrap with the Failed to update database exception in logs.

Caution

The following workaround is applicable only to deployments where mariadb-server has started successfully. Otherwise, fix the issues with MariaDB first.

Workaround:

  1. Verify that mariadb-server has started:

    kubectl get po -n kaas | grep mariadb-server
    
  2. Scale down the Keycloak instances:

    kubectl scale sts iam-keycloak --replicas=0 -n kaas
    
  3. Open the iam-keycloak-sh configmap for editing:

    kubectl edit cm -n kaas iam-keycloak-sh
    
  4. On the last line of the configmap, before the $MIGRATION_ARGS variable, add the following parameter:

    -Djboss.as.management.blocking.timeout=<RequiredValue>
    

    The recommended timeout value is a minimum of 15 minutes, set in seconds. For example, -Djboss.as.management.blocking.timeout=900.

  5. Open the iam-keycloak-startup configmap for editing:

    kubectl edit cm -n kaas iam-keycloak-startup
    
  6. In the iam-keycloak-startup configmap, add the following line:

    /subsystem=transactions/:write-attribute(name=default-timeout,value=<RequiredValue>)
    

    The recommended timeout value is a minimum of 15 minutes, set in seconds.

  7. In the Keycloak StatefulSet, adjust liveness probe timeouts:

    kubectl edit sts -n kaas iam-keycloak
    
  8. Scale up the Keycloak instances:

    kubectl scale sts iam-keycloak --replicas=3 -n kaas
    

LCM
[10029] Authentication fails with the 401 Unauthorized error

Authentication may not work on some controller nodes after a managed cluster creation. As a result, the Kubernetes API operations with the managed cluster kubeconfig fail with Response Status: 401 Unauthorized.

As a workaround, manually restart the ucp-controller and ucp-auth Docker services on the affected node.

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

[6066] Helm releases get stuck in FAILED or UNKNOWN state

Note

The issue affects only Helm v2 releases and is addressed for Helm v3. Starting from Container Cloud 2.19.0, all Helm releases are switched to v3.

During a management, regional, or managed cluster deployment, Helm releases may get stuck in the FAILED or UNKNOWN state although the corresponding machine statuses are Ready in the Container Cloud web UI. For example, if the StackLight Helm release fails, the links to its endpoints are grayed out in the web UI. In the cluster status, providerStatus.helm.ready and providerStatus.helm.releaseStatuses.<releaseName>.success are false.

HelmBundle cannot recover from such states and requires manual actions. The workaround below describes the recovery steps for the stacklight release that got stuck during a cluster deployment. Use this procedure as an example for other Helm releases as required.

Workaround:

  1. Verify the failed release has the UNKNOWN or FAILED status in the HelmBundle object:

    kubectl --kubeconfig <regionalClusterKubeconfigPath> get helmbundle <clusterName> -n <clusterProjectName> -o=jsonpath={.status.releaseStatuses.stacklight}
    

    In the command above and in the steps below, replace the parameters enclosed in angle brackets with the corresponding values of your cluster.

    Example of system response:

    stacklight:
      attempt: 2
      chart: ""
      finishedAt: "2021-02-05T09:41:05Z"
      hash: e314df5061bd238ac5f060effdb55e5b47948a99460c02c2211ba7cb9aadd623
      message: '[{"occurrence":1,"lastOccurrenceDate":"2021-02-05 09:41:05","content":"error
        updating the release: rpc error: code = Unknown desc = customresourcedefinitions.apiextensions.k8s.io
        \"helmbundles.lcm.mirantis.com\" already exists"}]'
      notes: ""
      status: UNKNOWN
      success: false
      version: 0.1.2-mcp-398
    
  2. Log in to the helm-controller pod console:

    kubectl --kubeconfig <affectedClusterKubeconfigPath> exec -n kube-system -it helm-controller-0 sh -c tiller
    
  3. Download the Helm v3 binary. For details, see official Helm documentation.

  4. Remove the failed release:

    helm delete <failed-release-name>
    

    For example:

    helm delete stacklight
    

    Once done, the release is triggered for redeployment.



StackLight
[11001] Patroni pod fails to start

Fixed in Container Cloud 2.6.0

After the management cluster update, a Patroni pod may fail to start and remain in the CrashLoopBackOff status. Messages similar to the following ones may be present in Patroni logs:

Local timeline=4 lsn=0/A000000
master_timeline=6
master: history=1 0/1ADEB48       no recovery target specified
2       0/8044500       no recovery target specified
3       0/A0000A0       no recovery target specified
4       0/A1B6CB0       no recovery target specified
5       0/A2C0C80       no recovery target specified

As a workaround, reinitialize the affected pod with a new volume by deleting the pod itself and the associated PersistentVolumeClaim (PVC).

Workaround:

  1. Obtain the PVC of the affected pod:

    kubectl -n stacklight get "pod/${POD_NAME}" -o jsonpath='{.spec.volumes[?(@.name=="storage-volume")].persistentVolumeClaim.claimName}'
    
  2. Delete the affected pod and its PVC:

    kubectl -n stacklight delete "pod/${POD_NAME}" "pvc/${POD_PVC}"
    sleep 3  # wait for StatefulSet to reschedule the pod, but miss dependent PVC creation
    kubectl -n stacklight delete "pod/${POD_NAME}"
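
    For reference, you can set the ${POD_NAME} and ${POD_PVC} variables used above as follows (a sketch; substitute the name of the affected Patroni pod):

    POD_NAME=<affectedPatroniPodName>
    POD_PVC=$(kubectl -n stacklight get "pod/${POD_NAME}" -o jsonpath='{.spec.volumes[?(@.name=="storage-volume")].persistentVolumeClaim.claimName}')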
    

Management and regional clusters
[9899] Helm releases get stuck in PENDING_UPGRADE during cluster update

Fixed in 2.14.0

Helm releases may get stuck in the PENDING_UPGRADE status during a management or managed cluster upgrade. The HelmBundle Controller cannot recover from this state and requires manual actions. The workaround below describes the recovery process for the openstack-operator release that got stuck during a managed cluster update. Use it as an example for other Helm releases as required.

Workaround:

  1. Log in to the helm-controller pod console:

    kubectl exec -n kube-system -it helm-controller-0 sh -c tiller
    
  2. Identify the release that is stuck in the PENDING_UPGRADE status. For example:

    ./helm --host=localhost:44134 history openstack-operator
    

    Example of system response:

    REVISION  UPDATED                   STATUS           CHART                      DESCRIPTION
    1         Tue Dec 15 12:30:41 2020  SUPERSEDED       openstack-operator-0.3.9   Install complete
    2         Tue Dec 15 12:32:05 2020  SUPERSEDED       openstack-operator-0.3.9   Upgrade complete
    3         Tue Dec 15 16:24:47 2020  PENDING_UPGRADE  openstack-operator-0.3.18  Preparing upgrade
    
  3. Roll back the failed release to the previous revision:

    1. Download the Helm v3 binary as shown in the sketch after this procedure. For details, see official Helm documentation.

    2. Roll back the failed release:

      helm rollback <failed-release-name>
      

      For example:

      helm rollback openstack-operator 2
      

    Once done, the release will be reconciled.
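
The following is a minimal sketch of step 3 above, assuming that the helm-controller pod has outbound network access and the wget utility; the Helm version in the URL is illustrative only, see the official Helm documentation for the current release:

    # Download and unpack an illustrative Helm v3 release inside the pod.
    wget https://get.helm.sh/helm-v3.5.4-linux-amd64.tar.gz
    tar -xzf helm-v3.5.4-linux-amd64.tar.gz
    # Roll back to revision 2, the last successfully deployed revision
    # from the history output in this example.
    ./linux-amd64/helm rollback openstack-operator 2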


[10424] Regional cluster cleanup fails by timeout

An OpenStack-based regional cluster cleanup fails with a timeout error.

Workaround:

  1. Wait for the Cluster object to be deleted in the bootstrap cluster:

    kubectl --kubeconfig <(./bin/kind get kubeconfig --name clusterapi) get cluster
    

    The system output must be empty.

  2. Remove the bootstrap cluster manually:

    ./bin/kind delete cluster --name clusterapi
    


Container Cloud web UI
[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs due to the token missing the necessary role for the new project. As a workaround, relogin to the Container Cloud web UI.


Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.5.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.17.4

aws-credentials-controller

1.17.4

Bare metal

baremetal-operator Updated

3.2.1

baremetal-public-api Updated

3.2.1

baremetal-provider Updated

1.17.6

httpd Updated

1.18.0

ironic Updated

ussuri-bionic-20210202180025

ironic-operator

base-bionic-20210106163336

kaas-ipam Updated

base-bionic-20210218141033

local-volume-provisioner

1.0.4-mcp

mariadb Updated

10.4.17-bionic-20210203155435

IAM

iam Updated

1.3.0

iam-controller Updated

1.17.4

keycloak

9.0.0

Container Cloud Updated

admission-controller

1.17.5

byo-credentials-controller

1.17.4

byo-provider

1.17.4

kaas-public-api

1.17.4

kaas-exporter

1.17.4

kaas-ui

1.17.4

lcm-controller

0.2.0-259-g71792430

mcc-cache New

1.17.4

proxy-controller New

1.17.4

release-controller

1.17.4

squid-proxy New

0.0.1-1

OpenStack Updated

openstack-provider

1.17.4

os-credentials-controller

1.17.4

VMware vSphere Updated

vsphere-provider

1.17.6

vsphere-credentials-controller

1.17.4

Artifacts

This section lists the component artifacts of the Mirantis Container Cloud release 2.5.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-3.2.1.tgz

baremetal-public-api Updated

https://binary.mirantis.com/bm/helm/baremetal-public-api-3.2.1.tgz

ironic-python-agent-bionic.kernel Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-ussuri-bionic-debug-20210204084827

ironic-python-agent-bionic.initramfs Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-ussuri-bionic-debug-20210204084827

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-3.2.1.tgz

local-volume-provisioner

https://binary.mirantis.com/bm/helm/local-volume-provisioner-1.0.4-mcp.tgz

Docker images

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-bionic-20201113171304

httpd Updated

mirantis.azurecr.io/lcm/nginx:1.18.0

ironic Updated

mirantis.azurecr.io/openstack/ironic:ussuri-bionic-20210202180025

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:ussuri-bionic-20210202180025

ironic-operator

mirantis.azurecr.io/bm/ironic-operator:base-bionic-20210106163336

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-bionic-20210218141033

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.4.17-bionic-20210203155435


Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.17.5.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.17.5.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.17.4.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.17.4.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.17.4.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.17.4.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.17.4.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.17.4.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.17.4.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.17.4.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.17.4.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.17.4.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.17.4.tgz

mcc-cache New

https://binary.mirantis.com/core/helm/mcc-cache-1.17.4.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.17.4.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.17.4.tgz

proxy-controller New

https://binary.mirantis.com/core/helm/proxy-controller-1.17.4.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.17.4.tgz

squid-proxy New

https://binary.mirantis.com/core/helm/squid-proxy-1.17.4.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.17.4.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.17.4.tgz

Docker images Updated

admission-controller

mirantis.azurecr.io/core/admission-controller:1.17.5

aws-cluster-api-controller

mirantis.azurecr.io/core/aws-cluster-api-controller:1.17.4

aws-credentials-controller

mirantis.azurecr.io/core/aws-credentials-controller:1.17.4

byo-cluster-api-controller

mirantis.azurecr.io/core/byo-cluster-api-controller:1.17.4

byo-credentials-controller

mirantis.azurecr.io/core/byo-credentials-controller:1.17.4

cluster-api-provider-baremetal

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.17.6

frontend

mirantis.azurecr.io/core/frontend:1.17.4

iam-controller

mirantis.azurecr.io/core/iam-controller:1.17.4

kproxy New

mirantis.azurecr.io/lcm/kproxy:1.17.4

lcm-controller

mirantis.azurecr.io/core/lcm-controller:v0.2.0-259-g71792430

nginx New

mirantis.azurecr.io/lcm/nginx:1.18.0

openstack-cluster-api-controller

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.17.4

os-credentials-controller

mirantis.azurecr.io/core/os-credentials-controller:1.17.4

registry New

mirantis.azurecr.io/lcm/registry:2.7.1

release-controller

mirantis.azurecr.io/core/release-controller:1.17.4

squid-proxy New

mirantis.azurecr.io/core/squid-proxy:0.0.1-1

vsphere-cluster-api-controller

mirantis.azurecr.io/core/vsphere-api-controller:1.17.6

vsphere-credentials-controller

mirantis.azurecr.io/core/vsphere-credentials-controller:1.17.6


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

iamctl-linux

http://binary.mirantis.com/iam/bin/iamctl-0.3.19-linux

iamctl-darwin

http://binary.mirantis.com/iam/bin/iamctl-0.3.19-darwin

iamctl-windows

http://binary.mirantis.com/iam/bin/iamctl-0.3.19-windows

Helm charts Updated

iam

http://binary.mirantis.com/iam/helm/iam-1.3.0.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

keycloak-proxy

http://binary.mirantis.com/core/helm/keycloak_proxy-1.17.4.tgz

Docker images

api Updated

mirantis.azurecr.io/iam/api:0.4.0

auxiliary Updated

mirantis.azurecr.io/iam/auxiliary:0.4.0

kubernetes-entrypoint Updated

mirantis.azurecr.io/iam/external/kubernetes-entrypoint:v0.3.1

mariadb

mirantis.azurecr.io/general/mariadb:10.4.16-bionic-20201105025052

keycloak Updated

mirantis.azurecr.io/iam/keycloak:0.4.0

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:6.0.1

2.4.0

The Mirantis Container Cloud GA release 2.4.0:

  • Introduces support for the Cluster release 5.11.0 that is based on Kubernetes 1.18, Mirantis Kubernetes Engine 3.3.4, and the updated version of Mirantis Container Runtime 19.03.14.

  • Supports the Cluster release 6.10.0 that is based on the Cluster release 5.10.0 and supports Mirantis OpenStack for Kubernetes (MOSK) Ussuri.

  • Still supports previous Cluster releases 5.10.0 and 6.8.1 that are now deprecated and will become unsupported in one of the following Container Cloud releases.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.4.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.4.0. For the list of enhancements in the Cluster release 5.11.0 and Cluster release 6.10.0 that are supported by the Container Cloud release 2.4.0, see the 5.11.0 and 6.10.0 sections.


Support for the updated version of Mirantis Container Runtime

Updated the Mirantis Container Runtime (MCR) version to 19.03.14 for all types of Container Cloud clusters.

For the MCR release highlights, see MCR documentation: MCR release notes.

Caution

Due to development limitations, the MCR upgrade to version 19.03.13 or 19.03.14 on existing Container Cloud clusters is not supported.

Dedicated network for Kubernetes pods traffic on bare metal clusters

Technology Preview

Implemented the k8s-pods bridge in L2 templates that allows you to use a dedicated network for Kubernetes pods traffic. When the k8s-pods bridge is defined in an L2 template, Calico CNI uses that network for routing the pods traffic between nodes.

Caution

Using a dedicated network for Kubernetes pods traffic as described above is available as Technology Preview. Use such a configuration for testing and evaluation purposes only. For the Technology Preview feature definition, refer to Technology Preview features.

The following features are still under development and will be announced in one of the following Container Cloud releases:

  • Switching Kubernetes API to listen to the specified IP address on the node

  • Enabling MetalLB to listen and respond on the dedicated virtual bridge

Feedback form improvement in Container Cloud web UI

Extended the functionality of the feedback form in the Container Cloud web UI. Using the Feedback button, you can now provide a 5-star product rating and feedback about Container Cloud. If you have an idea or have found a bug in Container Cloud, you can create a ticket for the Mirantis support team to help us improve the product.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.4.0 including the Cluster release 5.11.0 and 6.10.0.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


AWS
[8013] Managed cluster deployment requiring PVs may fail

Fixed in the Cluster release 7.0.0

Note

The issue below affects only the Kubernetes 1.18 deployments. Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

On a management cluster with multiple AWS-based managed clusters, some clusters fail to complete the deployments that require persistent volumes (PVs), for example, Elasticsearch. Some of the affected pods get stuck in the Pending state with the pod has unbound immediate PersistentVolumeClaims and node(s) had volume node affinity conflict errors.

Warning

The workaround below applies to HA deployments where data can be rebuilt from replicas. If you have a non-HA deployment, back up any existing data before proceeding, since all data will be lost while applying the workaround.

Workaround:

  1. Obtain the persistent volume claims related to the storage mounts of the affected pods:

    kubectl get pod/<pod_name1> pod/<pod_name2> \
    -o jsonpath='{.spec.volumes[?(@.persistentVolumeClaim)].persistentVolumeClaim.claimName}'
    

    Note

    In the command above and in the subsequent steps, substitute the parameters enclosed in angle brackets with the corresponding values.

  2. Delete the affected Pods and PersistentVolumeClaims to reschedule them. For example, for StackLight:

    kubectl -n stacklight delete \
      pod/<pod_name1> pod/<pod_name2> ... \
      pvc/<pvc_name1> pvc/<pvc_name2> ...
    


Bare metal
[9875] Full preflight fails with a timeout waiting for BareMetalHost

Fixed in Container Cloud 2.6.0

If you run bootstrap.sh preflight with KAAS_BM_FULL_PREFLIGHT=true, the script fails with the following message:

failed to create BareMetal objects: failed to wait for objects of kinds BareMetalHost
to become available: timed out waiting for the condition

As a workaround, unset full preflight using unset KAAS_BM_FULL_PREFLIGHT to run fast preflight instead.
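
The following is a minimal sketch of the workaround, assuming that bootstrap.sh is run from the bootstrap directory:

    # Drop the full preflight mode and rerun the fast preflight instead.
    unset KAAS_BM_FULL_PREFLIGHT
    ./bootstrap.sh preflight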

[11102] Keepalived does not detect the loss of VIP deleted by netplan

Fixed in Container Cloud 2.5.0

This issue may occur on the baremetal-based managed clusters that are created using L2 templates when network configuration is changed by the user or when Container Cloud is updated from version 2.3.0 to 2.4.0.

Due to a known community issue, Keepalived 1.3.9 does not detect and restore a VIP of a managed cluster node after running the netplan apply command. The command is used to apply network configuration changes.

As a result, the Kubernetes API on the affected managed clusters becomes inaccessible.

As a workaround, log in to all nodes of the affected managed clusters and restart Keepalived using systemctl restart keepalived.
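
The following is a minimal sketch of the restart to run on every node of the affected managed cluster; the sudo prefix assumes a non-root user:

    # Restart Keepalived and verify that it is active and the VIP is restored.
    sudo systemctl restart keepalived
    sudo systemctl status keepalived --no-pager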

[6988] LVM fails to deploy if the volume group name already exists

Fixed in Container Cloud 2.5.0

During a management or managed cluster deployment, LVM cannot be deployed on a new disk if an old volume group with the same name already exists on the target hardware node but on the different disk.

Workaround:

In the bare metal host profile specific to your hardware configuration, add the wipe: true parameter to the device that fails to be deployed. For the procedure details, see Operations Guide: Create a custom host profile.
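
The following is a minimal sketch, assuming that the profile is edited directly through the Kubernetes API; <profileName> and <projectName> are placeholders, and the full procedure is described in the Operations Guide referenced above:

    # Open the profile for editing and add "wipe: true" to the device entry
    # that fails to be deployed.
    kubectl edit baremetalhostprofile <profileName> -n <projectName>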

[7655] Wrong status for an incorrectly configured L2 template

Fixed in 2.11.0

If an L2 template is configured incorrectly, a bare metal cluster is deployed successfully but with the runtime errors in the IpamHost object.

Workaround:

If you suspect that the machine is not working properly because of incorrect network configuration, verify the status of the corresponding IpamHost object. Inspect the l2RenderResult and ipAllocationResult object fields for error messages.
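
The following is a minimal sketch of the verification, assuming that <hostName> and <projectName> are placeholders for the affected IpamHost object and its project:

    # Inspect the l2RenderResult and ipAllocationResult fields in the output
    # for error messages.
    kubectl get ipamhost <hostName> -n <projectName> -o yaml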


[8560] Manual deletion of BareMetalHost leads to its silent removal

Fixed in Container Cloud 2.5.0

If a BareMetalHost object is manually removed from a managed cluster, it is silently removed without a power-off and deprovisioning, which leads to managed cluster failures.

Workaround:

Do not manually delete a BareMetalHost that has the Provisioned status.


Storage
[10060] Ceph OSD node removal fails

Fixed in Container Cloud 2.7.0

A Ceph node removal is not being triggered properly after updating the KaasCephCluster custom resource (CR). Both management and managed clusters are affected.

Workaround:

  1. Remove the parameters for a Ceph OSD from the KaasCephCluster CR as described in Operations Guide: Add, remove, or reconfigure Ceph nodes.

  2. Obtain the IDs of the osd and mon services that are located on the old node:

    1. Obtain the UID of the affected machine:

      kubectl get machine <CephOSDNodeName> -n <ManagedClusterProjectName> -o jsonpath='{.metadata.annotations.kaas\.mirantis\.com\/uid}'
      
    2. Export kubeconfig of your managed cluster. For example:

      export KUBECONFIG=~/Downloads/kubeconfig-test-cluster.yml
      
    3. Identify the IDs of the pods that run the osd and mon services:

      kubectl get pods -o wide -n rook-ceph | grep <affectedMachineUID> | grep -E "mon|osd"
      

      Example of the system response extract:

      rook-ceph-mon-c-7bbc5d757d-5bpws                              1/1  Running    1  6h1m
      rook-ceph-osd-2-58775d5568-5lklw                              1/1  Running    4  44h
      rook-ceph-osd-prepare-705ae6c647cfdac928c63b63e2e2e647-qn4m9  0/1  Completed  0  94s
      

      The pod IDs include the osd or mon service IDs. In the example system response above, the osd ID is 2 and the mon ID is c.

  3. Delete the deployments of the osd and mon services obtained in the previous step:

    kubectl delete deployment rook-ceph-osd(mon)-<ID> -n rook-ceph
    

    For example:

    kubectl delete deployment rook-ceph-mon-c -n rook-ceph
    kubectl delete deployment rook-ceph-osd-2 -n rook-ceph
    
  4. Log in to the ceph-tools pod:

    kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
    
  5. Rebalance the Ceph OSDs:

    ceph osd out osd(s).ID
    

    Wait for the rebalance to complete.

  6. Rebalance the Ceph data:

    ceph osd purge osd(s).ID
    

    Wait for the Ceph data to rebalance.

  7. Remove the old node from the Ceph OSD tree:

    ceph osd crush rm <NodeName>
    
  8. If the removed node contained mon services, remove them:

    ceph mon rm <monID>
    
[9928] Ceph rebalance during a managed cluster update

Fixed in Container Cloud 2.5.0

During a managed cluster update, Ceph rebalance leading to data loss may occur.

Workaround:

  1. Before updating a managed cluster:

    1. Log in to the ceph-tools pod:

      kubectl -n rook-ceph exec -it <ceph-tools-pod-name> bash
      
    2. Set the noout flag:

      ceph osd set noout
      
  2. Update a managed cluster.

  3. After updating a managed cluster:

    1. Log in to the ceph-tools pod:

      kubectl -n rook-ceph exec -it <ceph-tools-pod-name> bash
      
    2. Unset the noout flag:

      ceph osd unset noout
      
[7073] Cannot automatically remove a Ceph node

Fixed in 2.16.0

When removing a worker node, it is not possible to automatically remove a Ceph node. The workaround is to manually remove the Ceph node from the Ceph cluster as described in Operations Guide: Add, remove, or reconfigure Ceph nodes before removing the worker node from your deployment.

[10050] Ceph OSD pod is in the CrashLoopBackOff state after disk replacement

Fixed in 2.11.0

If you use a custom BareMetalHostProfile, after disk replacement on a Ceph OSD, the Ceph OSD pod switches to the CrashLoopBackOff state due to the Ceph OSD authorization key failing to be created properly.

Workaround:

  1. Export kubeconfig of your managed cluster. For example:

    export KUBECONFIG=~/Downloads/kubeconfig-test-cluster.yml
    
  2. Log in to the ceph-tools pod:

    kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
    
  3. Delete the authorization key for the failed Ceph OSD:

    ceph auth del osd.<ID>
    
  4. SSH to the node on which the Ceph OSD cannot be created.

  5. Clean up the disk that will be a base for the failed Ceph OSD. For details, see official Rook documentation.

    Note

    Ignore failures of the sgdisk --zap-all $DISK and blkdiscard $DISK commands if any.

  6. On the managed cluster, restart Rook Operator:

    kubectl -n rook-ceph delete pod -l app=rook-ceph-operator
    


LCM
[10029] Authentication fails with the 401 Unauthorized error

Authentication may not work on some controller nodes after a managed cluster creation. As a result, the Kubernetes API operations with the managed cluster kubeconfig fail with Response Status: 401 Unauthorized.

As a workaround, manually restart the ucp-controller and ucp-auth Docker services on the affected node.
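
The following is a minimal sketch of the restart, assuming that the services run as Docker containers whose names contain ucp-controller and ucp-auth; run it on the affected controller node:

    # Restart all matching containers on the affected node.
    docker ps --format '{{.Names}}' | grep -E 'ucp-controller|ucp-auth' | xargs -r docker restart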

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

[6066] Helm releases get stuck in FAILED or UNKNOWN state

Note

The issue affects only Helm v2 releases and is addressed for Helm v3. Starting from Container Cloud 2.19.0, all Helm releases are switched to v3.

During a management, regional, or managed cluster deployment, Helm releases may get stuck in the FAILED or UNKNOWN state although the corresponding machines statuses are Ready in the Container Cloud web UI. For example, if the StackLight Helm release fails, the links to its endpoints are grayed out in the web UI. In the cluster status, providerStatus.helm.ready and providerStatus.helm.releaseStatuses.<releaseName>.success are false.

HelmBundle cannot recover from such states and requires manual actions. The workaround below describes the recovery steps for the stacklight release that got stuck during a cluster deployment. Use this procedure as an example for other Helm releases as required.

Workaround:

  1. Verify the failed release has the UNKNOWN or FAILED status in the HelmBundle object:

    kubectl --kubeconfig <regionalClusterKubeconfigPath> get helmbundle <clusterName> -n <clusterProjectName> -o=jsonpath={.status.releaseStatuses.stacklight}
    
    In the command above and in the steps below, replace the parameters
    enclosed in angle brackets with the corresponding values of your cluster.
    

    Example of system response:

    stacklight:
    attempt: 2
    chart: ""
    finishedAt: "2021-02-05T09:41:05Z"
    hash: e314df5061bd238ac5f060effdb55e5b47948a99460c02c2211ba7cb9aadd623
    message: '[{"occurrence":1,"lastOccurrenceDate":"2021-02-05 09:41:05","content":"error
      updating the release: rpc error: code = Unknown desc = customresourcedefinitions.apiextensions.k8s.io
      \"helmbundles.lcm.mirantis.com\" already exists"}]'
    notes: ""
    status: UNKNOWN
    success: false
    version: 0.1.2-mcp-398
    
  2. Log in to the helm-controller pod console:

    kubectl --kubeconfig <affectedClusterKubeconfigPath> exec -n kube-system -it helm-controller-0 sh -c tiller
    
  3. Download the Helm v3 binary. For details, see official Helm documentation.

  4. Remove the failed release:

    helm delete <failed-release-name>
    

    For example:

    helm delete stacklight
    

    Once done, the release is triggered for redeployment.



StackLight
[11001] Patroni pod fails to start

Fixed in Container Cloud 2.6.0

After the management cluster update, a Patroni pod may fail to start and remain in the CrashLoopBackOff status. Messages similar to the following ones may be present in Patroni logs:

Local timeline=4 lsn=0/A000000
master_timeline=6
master: history=1 0/1ADEB48       no recovery target specified
2       0/8044500       no recovery target specified
3       0/A0000A0       no recovery target specified
4       0/A1B6CB0       no recovery target specified
5       0/A2C0C80       no recovery target specified

As a workaround, reinitialize the affected pod with a new volume by deleting the pod itself and the associated PersistentVolumeClaim (PVC).

Workaround:

  1. Obtain the PVC of the affected pod:

    kubectl -n stacklight get "pod/${POD_NAME}" -o jsonpath='{.spec.volumes[?(@.name=="storage-volume")].persistentVolumeClaim.claimName}'
    
  2. Delete the affected pod and its PVC:

    kubectl -n stacklight delete "pod/${POD_NAME}" "pvc/${POD_PVC}"
    sleep 3  # wait for StatefulSet to reschedule the pod, but miss dependent PVC creation
    kubectl -n stacklight delete "pod/${POD_NAME}"
    

Management cluster update
[9899] Helm releases get stuck in PENDING_UPGRADE during cluster update

Fixed in 2.14.0

Helm releases may get stuck in the PENDING_UPGRADE status during a management or managed cluster upgrade. The HelmBundle Controller cannot recover from this state and requires manual actions. The workaround below describes the recovery process for the openstack-operator release that got stuck during a managed cluster update. Use it as an example for other Helm releases as required.

Workaround:

  1. Log in to the helm-controller pod console:

    kubectl exec -n kube-system -it helm-controller-0 sh -c tiller
    
  2. Identify the release that is stuck in the PENDING_UPGRADE status. For example:

    ./helm --host=localhost:44134 history openstack-operator
    

    Example of system response:

    REVISION  UPDATED                   STATUS           CHART                      DESCRIPTION
    1         Tue Dec 15 12:30:41 2020  SUPERSEDED       openstack-operator-0.3.9   Install complete
    2         Tue Dec 15 12:32:05 2020  SUPERSEDED       openstack-operator-0.3.9   Upgrade complete
    3         Tue Dec 15 16:24:47 2020  PENDING_UPGRADE  openstack-operator-0.3.18  Preparing upgrade
    
  3. Roll back the failed release to the previous revision:

    1. Download the Helm v3 binary. For details, see official Helm documentation.

    2. Roll back the failed release:

      helm rollback <failed-release-name>
      

      For example:

      helm rollback openstack-operator 2
      

    Once done, the release will be reconciled.



Container Cloud web UI
[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs due to the token missing the necessary role for the new project. As a workaround, relogin to the Container Cloud web UI.


Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.4.0 and the Cluster releases 5.11.0 and 6.10.0:

  • [10351] [BM] [IPAM] Fixed the issue with the automatically allocated subnet having the ability to requeue allocation from a SubnetPool in the error state.

  • [10104] [BM] [Ceph] Fixed the issue with OpenStack services failing to access the rook-ceph-mon-* pods due to the connection metadata changing after a pod restart if Ceph was deployed without hostNetwork: true.


  • [2757] [IAM] Fixed the issue with IAM failing to start with the IAM pods being in the CrashLoopBackOff state during a management cluster deployment.

  • [7562] [IAM] Disabled the http port in Keycloak to prevent security vulnerabilities.


  • [10108] [LCM] Fixed the issue with accidental upgrade of the docker-ee, docker-ee-cli, and containerd.io packages that must be pinned during the host OS upgrade.

  • [10094] [LCM] Fixed the issue with error handling in the manage-taints Ansible script.

  • [9676] [LCM] Fixed the issue with Keepalived and NGINX being installed on worker nodes instead of being installed on control plane nodes only.


  • [10323] [UI] Fixed the issue with offline tokens expiring over time if fetched using the Container Cloud web UI. The issue occurred if the Log in with Keycloak option was used.

  • [8966] [UI] Fixed the issue with the "invalid_grant", "error_description": "Session doesn't have required client" error occurring over time after logging in to the Container Cloud web UI through Log in with Keycloak.

  • [10180] [UI] Fixed the issue with the SSH Keys dialog becoming blank after the token expiration.

  • [7781] [UI] Fixed the issue with the previously selected Ceph cluster machines disappearing from the drop-down menu of the Create New Ceph Cluster dialog.

  • [7843] [UI] Fixed the issue with Provider Credentials being stuck in the Processing state if created using the Add new credential option of the Create New Cluster dialog.

Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.4.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.16.1

aws-credentials-controller

1.16.1

Bare metal

baremetal-operator Updated

3.1.7

baremetal-public-api Updated

3.1.7

baremetal-provider Updated

1.16.4

httpd

2.4.46-20201001171500

ironic Updated

ussuri-bionic-20210113180016

ironic-operator Updated

base-bionic-20210106163336

kaas-ipam Updated

base-bionic-20210106163449

local-volume-provisioner

1.0.4-mcp

mariadb Updated

10.4.17-bionic-20210106145941

IAM

iam Updated

1.2.1

iam-controller Updated

1.16.1

keycloak

9.0.0

Container Cloud

admission-controller Updated

1.16.1

byo-credentials-controller Updated

1.16.1

byo-provider Updated

1.16.1

kaas-public-api Updated

1.16.1

kaas-exporter Updated

1.16.1

kaas-ui Updated

1.16.2

lcm-controller

0.2.0-224-g5c413d37

release-controller Updated

1.16.1

OpenStack Updated

openstack-provider

1.16.1

os-credentials-controller

1.16.1

VMware vSphere Updated

vsphere-provider

1.16.1

vsphere-credentials-controller

1.16.4

Artifacts

This section lists the component artifacts of the Mirantis Container Cloud release 2.4.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-3.1.7.tgz

baremetal-public-api Updated

https://binary.mirantis.com/bm/helm/baremetal-public-api-3.1.7.tgz

ironic-python-agent-bionic.kernel Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-ussuri-bionic-debug-20210108095808

ironic-python-agent-bionic.initramfs Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-ussuri-bionic-debug-20210108095808

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-3.1.7.tgz

local-volume-provisioner

https://binary.mirantis.com/bm/helm/local-volume-provisioner-1.0.4-mcp.tgz

Docker images

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-bionic-20201113171304

httpd

mirantis.azurecr.io/bm/external/httpd:2.4.46-20201001171500

ironic Updated

mirantis.azurecr.io/openstack/ironic:ussuri-bionic-20210113180016

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:ussuri-bionic-20210113180016

ironic-operator Updated

mirantis.azurecr.io/bm/ironic-operator:base-bionic-20210106163336

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-bionic-20210106163449

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.4.17-bionic-20210106145941


Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.16.1.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.16.1.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.16.1.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.16.1.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.16.1.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.16.1.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.16.1.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.16.1.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.16.1.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.16.1.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.16.1.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.16.1.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.16.1.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.16.1.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.16.1.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.16.1.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.16.1.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.16.1.tgz

Docker images Updated

admission-controller

mirantis.azurecr.io/core/admission-controller:1.16.1

aws-cluster-api-controller

mirantis.azurecr.io/core/aws-cluster-api-controller:1.16.1

aws-credentials-controller

mirantis.azurecr.io/core/aws-credentials-controller:1.16.1

byo-cluster-api-controller

mirantis.azurecr.io/core/byo-cluster-api-controller:1.16.1

byo-credentials-controller

mirantis.azurecr.io/core/byo-credentials-controller:1.16.1

cluster-api-provider-baremetal

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.16.1

frontend

mirantis.azurecr.io/core/frontend:1.16.1

iam-controller

mirantis.azurecr.io/core/iam-controller:1.16.1

lcm-controller

mirantis.azurecr.io/core/lcm-controller:v0.2.0-224-g5c413d37

openstack-cluster-api-controller

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.16.1

os-credentials-controller

mirantis.azurecr.io/core/os-credentials-controller:1.16.1

release-controller

mirantis.azurecr.io/core/release-controller:1.16.1

vsphere-cluster-api-controller

mirantis.azurecr.io/core/vsphere-api-controller:1.16.1

vsphere-credentials-controller

mirantis.azurecr.io/core/vsphere-credentials-controller:1.16.4


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

iamctl-linux Updated

http://binary.mirantis.com/iam/bin/iamctl-0.3.19-linux

iamctl-darwin Updated

http://binary.mirantis.com/iam/bin/iamctl-0.3.19-darwin

iamctl-windows Updated

http://binary.mirantis.com/iam/bin/iamctl-0.3.19-windows

Helm charts

iam Updated

http://binary.mirantis.com/iam/helm/iam-1.2.1.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.3.tgz

keycloak-proxy Updated

http://binary.mirantis.com/core/helm/keycloak_proxy-1.16.3.tgz

Docker images

api

mirantis.azurecr.io/iam/api:0.3.18

auxiliary

mirantis.azurecr.io/iam/auxiliary:0.3.18

kubernetes-entrypoint Updated

mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.0-20200311160233

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.4.16-bionic-20201105025052

keycloak

mirantis.azurecr.io/iam/keycloak:0.3.19

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:6.0.1

2.3.0

The Mirantis Container Cloud GA release 2.3.0:

  • Introduces support for the Cluster release 5.10.0 that is based on Kubernetes 1.18 and the updated versions of Mirantis Kubernetes Engine 3.3.4 and Mirantis Container Runtime 19.03.13.

  • Introduces support for the Cluster release 6.10.0 that is based on the Cluster release 5.10.0 and supports Mirantis OpenStack for Kubernetes (MOSK) Ussuri.

  • Still supports previous Cluster releases 5.9.0 and 6.8.1 that are now deprecated and will become unsupported in one of the following Container Cloud releases.

    Caution

    Make sure to update the Cluster release version of your managed cluster before the current Cluster release version becomes unsupported by a new Container Cloud release version. Otherwise, Container Cloud stops auto-upgrade and eventually Container Cloud itself becomes unsupported.

This section outlines release notes for the Container Cloud release 2.3.0.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.3.0. For the list of enhancements in the Cluster release 5.10.0 and Cluster release 6.10.0 introduced by the Container Cloud release 2.3.0, see the 5.10.0 and 6.10.0 sections.


Updated versions of Mirantis Kubernetes Engine and Container Runtime

Updated the Mirantis Kubernetes Engine (MKE) version to 3.3.4 and the Mirantis Container Runtime (MCR) version to 19.03.13 for the Container Cloud management and managed clusters.

For the MKE release highlights and components versions, see MKE documentation: MKE release notes.

For the MCR release highlights, see MCR documentation: MCR release notes.

Caution

Due to development limitations, the MCR upgrade to version 19.03.13 or 19.03.14 on existing Container Cloud clusters is not supported.

Additional regional cluster on VMware vSphere

Technical Preview

Within the scope of Technology Preview support for the VMware vSphere provider, added the capability to deploy an additional regional vSphere-based cluster on top of the vSphere management cluster to create managed clusters with different configurations if required.

Automated setup of a VM template for the VMware vSphere provider

Technical Preview

Automated the process of a VM template setup for the vSphere-based management and managed clusters deployments. The VM template is now set up by Packer using the vsphere_template flag that is integrated into bootstrap.sh.

StackLight support for VMware vSphere

Technical Preview

Added the capability to deploy StackLight on management clusters. However, such deployment has the following limitations:

  • The Kubernetes Nodes and Kubernetes Cluster Grafana dashboards may have empty panels.

  • The DockerNetworkUnhealthy and etcdGRPCRequestsSlow alerts may fail to raise.

  • The CPUThrottlingHigh, CalicoDataplaneIfaceMsgBatchSizeHigh, KubeCPUOvercommitPods, KubeMemOvercommitPods alerts, and the TargetDown alert for the prometheus-node-exporter and calico-node pods may be constantly firing.

Support of multiple host-specific L2 templates per bare metal cluster

Added support of multiple host-specific L2 templates to be applied to different nodes of the same bare metal cluster. Now, you can use several independent host-specific L2 templates on a cluster to support different hardware configurations. For example, you can create L2 templates with a different number and layout of NICs to be applied to the specific machines of a cluster.

Improvements in the Container Cloud logs collection

Improved the user experience with the Container Cloud resources logs collection by implementing collection of logs on the Mirantis Kubernetes Engine cluster and on all Kubernetes pods, including the ones that were previously removed or failed.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.3.0 including the Cluster release 5.10.0.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


AWS
[8013] Managed cluster deployment requiring PVs may fail

Fixed in the Cluster release 7.0.0

Note

The issue below affects only the Kubernetes 1.18 deployments. Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

On a management cluster with multiple AWS-based managed clusters, some clusters fail to complete the deployments that require persistent volumes (PVs), for example, Elasticsearch. Some of the affected pods get stuck in the Pending state with the pod has unbound immediate PersistentVolumeClaims and node(s) had volume node affinity conflict errors.

Warning

The workaround below applies to HA deployments where data can be rebuilt from replicas. If you have a non-HA deployment, back up any existing data before proceeding, since all data will be lost while applying the workaround.

Workaround:

  1. Obtain the persistent volume claims related to the storage mounts of the affected pods:

    kubectl get pod/<pod_name1> pod/<pod_name2> \
    -o jsonpath='{.spec.volumes[?(@.persistentVolumeClaim)].persistentVolumeClaim.claimName}'
    

    Note

    In the command above and in the subsequent steps, substitute the parameters enclosed in angle brackets with the corresponding values.

  2. Delete the affected Pods and PersistentVolumeClaims to reschedule them. For example, for StackLight:

    kubectl -n stacklight delete \
      pod/<pod_name1> pod/<pod_name2> ... \
      pvc/<pvc_name1> pvc/<pvc_name2> ...
    


Bare metal
[6988] LVM fails to deploy if the volume group name already exists

Fixed in Container Cloud 2.5.0

During a management or managed cluster deployment, LVM cannot be deployed on a new disk if an old volume group with the same name already exists on the target hardware node but on the different disk.

Workaround:

In the bare metal host profile specific to your hardware configuration, add the wipe: true parameter to the device that fails to be deployed. For the procedure details, see Operations Guide: Create a custom host profile.
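
The following is a minimal sketch, assuming that the profile is edited directly through the Kubernetes API; <profileName> and <projectName> are placeholders, and the full procedure is described in the Operations Guide referenced above:

    # Open the profile for editing and add "wipe: true" to the device entry
    # that fails to be deployed.
    kubectl edit baremetalhostprofile <profileName> -n <projectName>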

[7655] Wrong status for an incorrectly configured L2 template

Fixed in 2.11.0

If an L2 template is configured incorrectly, a bare metal cluster is deployed successfully but with the runtime errors in the IpamHost object.

Workaround:

If you suspect that the machine is not working properly because of incorrect network configuration, verify the status of the corresponding IpamHost object. Inspect the l2RenderResult and ipAllocationResult object fields for error messages.
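
The following is a minimal sketch of the verification, assuming that <hostName> and <projectName> are placeholders for the affected IpamHost object and its project:

    # Inspect the l2RenderResult and ipAllocationResult fields in the output
    # for error messages.
    kubectl get ipamhost <hostName> -n <projectName> -o yaml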


[8560] Manual deletion of BareMetalHost leads to its silent removal

Fixed in Container Cloud 2.5.0

If a BareMetalHost object is manually removed from a managed cluster, it is silently removed without a power-off and deprovisioning, which leads to managed cluster failures.

Workaround:

Do not manually delete a BareMetalHost that has the Provisioned status.

[9875] Full preflight fails with a timeout waiting for BareMetalHost

Fixed in Container Cloud 2.6.0

If you run bootstrap.sh preflight with KAAS_BM_FULL_PREFLIGHT=true, the script fails with the following message:

failed to create BareMetal objects: failed to wait for objects of kinds BareMetalHost
to become available: timed out waiting for the condition

As a workaround, unset full preflight using unset KAAS_BM_FULL_PREFLIGHT to run fast preflight instead.
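
The following is a minimal sketch of the workaround, assuming that bootstrap.sh is run from the bootstrap directory:

    # Drop the full preflight mode and rerun the fast preflight instead.
    unset KAAS_BM_FULL_PREFLIGHT
    ./bootstrap.sh preflight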


IAM
[2757] IAM fails to start during management cluster deployment

Fixed in Container Cloud 2.4.0

During a management cluster deployment, IAM fails to start with the IAM pods being in the CrashLoopBackOff status.

Workaround:

  1. Log in to the bootstrap node.

  2. Remove the iam-mariadb-state configmap:

    kubectl delete cm -n kaas iam-mariadb-state
    
  3. Manually delete the mariadb pods:

    kubectl delete po -n kaas mariadb-server-{0,1,2}
    

    Wait for the pods to start. If the mariadb pod does not start and shows the connection to peer timed out exception, repeat step 2.

  4. Obtain the MariaDB database admin password:

    kubectl get secrets -n kaas mariadb-dbadmin-password \
    -o jsonpath='{.data.MYSQL_DBADMIN_PASSWORD}' | base64 -d ; echo
    
  5. Log in to MariaDB:

    kubectl exec -it -n kaas mariadb-server-0 -- bash -c 'mysql -uroot -p<mysqlDbadminPassword>'
    

    Substitute <mysqlDbadminPassword> with the corresponding value obtained in the previous step.

  6. Run the following command:

    DROP DATABASE IF EXISTS keycloak;
    
  7. Manually delete the Keycloak pods:

    kubectl delete po -n kaas iam-keycloak-{0,1,2}
    

LCM
[10029] Authentication fails with the 401 Unauthorized error

Authentication may not work on some controller nodes after a managed cluster creation. As a result, the Kubernetes API operations with the managed cluster kubeconfig fail with Response Status: 401 Unauthorized.

As a workaround, manually restart the ucp-controller and ucp-auth Docker services on the affected node.
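
The following is a minimal sketch of the restart, assuming that the services run as Docker containers whose names contain ucp-controller and ucp-auth; run it on the affected controller node:

    # Restart all matching containers on the affected node.
    docker ps --format '{{.Names}}' | grep -E 'ucp-controller|ucp-auth' | xargs -r docker restart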

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.


Management cluster update
[9899] Helm releases get stuck in PENDING_UPGRADE during cluster update

Fixed in 2.14.0

Helm releases may get stuck in the PENDING_UPGRADE status during a management or managed cluster upgrade. The HelmBundle Controller cannot recover from this state and requires manual actions. The workaround below describes the recovery process for the openstack-operator release that got stuck during a managed cluster update. Use it as an example for other Helm releases as required.

Workaround:

  1. Log in to the helm-controller pod console:

    kubectl exec -n kube-system -it helm-controller-0 sh -c tiller
    
  2. Identify the release that is stuck in the PENDING_UPGRADE status. For example:

    ./helm --host=localhost:44134 history openstack-operator
    

    Example of system response:

    REVISION  UPDATED                   STATUS           CHART                      DESCRIPTION
    1         Tue Dec 15 12:30:41 2020  SUPERSEDED       openstack-operator-0.3.9   Install complete
    2         Tue Dec 15 12:32:05 2020  SUPERSEDED       openstack-operator-0.3.9   Upgrade complete
    3         Tue Dec 15 16:24:47 2020  PENDING_UPGRADE  openstack-operator-0.3.18  Preparing upgrade
    
  3. Roll back the failed release to the previous revision:

    1. Download the Helm v3 binary. For details, see official Helm documentation.

    2. Roll back the failed release:

      helm rollback <failed-release-name>
      

      For example:

      helm rollback openstack-operator 2
      

    Once done, the release will be reconciled.



Storage
[10060] Ceph OSD node removal fails

Fixed in Container Cloud 2.7.0

A Ceph node removal is not being triggered properly after updating the KaasCephCluster custom resource (CR). Both management and managed clusters are affected.

Workaround:

  1. Remove the parameters for a Ceph OSD from the KaasCephCluster CR as described in Operations Guide: Add, remove, or reconfigure Ceph nodes.

  2. Obtain the IDs of the osd and mon services that are located on the old node:

    1. Obtain the UID of the affected machine:

      kubectl get machine <CephOSDNodeName> -n <ManagedClusterProjectName> -o jsonpath='{.metadata.annotations.kaas\.mirantis\.com\/uid}'
      
    2. Export kubeconfig of your managed cluster. For example:

      export KUBECONFIG=~/Downloads/kubeconfig-test-cluster.yml
      
    3. Identify the IDs of the pods that run the osd and mon services:

      kubectl get pods -o wide -n rook-ceph | grep <affectedMachineUID> | grep -E "mon|osd"
      

      Example of the system response extract:

      rook-ceph-mon-c-7bbc5d757d-5bpws                              1/1  Running    1  6h1m
      rook-ceph-osd-2-58775d5568-5lklw                              1/1  Running    4  44h
      rook-ceph-osd-prepare-705ae6c647cfdac928c63b63e2e2e647-qn4m9  0/1  Completed  0  94s
      

      The pod IDs include the osd or mon service IDs. In the example system response above, the osd ID is 2 and the mon ID is c.

  3. Delete the deployments of the osd and mon services obtained in the previous step:

    kubectl delete deployment rook-ceph-osd(mon)-<ID> -n rook-ceph
    

    For example:

    kubectl delete deployment rook-ceph-mon-c -n rook-ceph
    kubectl delete deployment rook-ceph-osd-2 -n rook-ceph
    
  4. Log in to the ceph-tools pod:

    kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
    
  5. Rebalance the Ceph OSDs:

    ceph osd out osd(s).ID
    

    Wait for the rebalance to complete.

  6. Rebalance the Ceph data:

    ceph osd purge osd(s).ID
    

    Wait for the Ceph data to rebalance.

  7. Remove the old node from the Ceph OSD tree:

    ceph osd crush rm <NodeName>
    
  8. If the removed node contained mon services, remove them:

    ceph mon rm <monID>
    
[9928] Ceph rebalance during a managed cluster update

Fixed in Container Cloud 2.5.0

During a managed cluster update, Ceph rebalance leading to data loss may occur.

Workaround:

  1. Before updating a managed cluster:

    1. Log in to the ceph-tools pod:

      kubectl -n rook-ceph exec -it <ceph-tools-pod-name> bash
      
    2. Set the noout flag:

      ceph osd set noout
      
  2. Update a managed cluster.

  3. After updating a managed cluster:

    1. Log in to the ceph-tools pod:

      kubectl -n rook-ceph exec -it <ceph-tools-pod-name> bash
      
    2. Unset the noout flag:

      ceph osd unset noout
      
[7073] Cannot automatically remove a Ceph node

Fixed in 2.16.0

When removing a worker node, it is not possible to automatically remove a Ceph node. The workaround is to manually remove the Ceph node from the Ceph cluster as described in Operations Guide: Add, remove, or reconfigure Ceph nodes before removing the worker node from your deployment.

[10050] Ceph OSD pod is in the CrashLoopBackOff state after disk replacement

Fixed in 2.11.0

If you use a custom BareMetalHostProfile, after disk replacement on a Ceph OSD, the Ceph OSD pod switches to the CrashLoopBackOff state due to the Ceph OSD authorization key failing to be created properly.

Workaround:

  1. Export kubeconfig of your managed cluster. For example:

    export KUBECONFIG=~/Downloads/kubeconfig-test-cluster.yml
    
  2. Log in to the ceph-tools pod:

    kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
    
  3. Delete the authorization key for the failed Ceph OSD:

    ceph auth del osd.<ID>
    
  4. SSH to the node on which the Ceph OSD cannot be created.

  5. Clean up the disk that will be a base for the failed Ceph OSD. For details, see official Rook documentation.

    Note

    Ignore failures of the sgdisk --zap-all $DISK and blkdiscard $DISK commands if any.

  6. On the managed cluster, restart Rook Operator:

    kubectl -n rook-ceph delete pod -l app=rook-ceph-operator
    


Container Cloud web UI
[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs due to the token missing the necessary role for the new project. As a workaround, relogin to the Container Cloud web UI.


Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.3.0 and the Cluster releases 5.10.0 and 6.10.0:

  • [8869] Upgraded kind from version 0.3.0 to 0.9.0 and the kindest/node image version from 1.14.2 to 1.18.8 to enhance the Container Cloud performance and prevent compatibility issues.

  • [8220] Fixed the issue with failure to switch the default label from one BareMetalHostProfile to another.

  • [7255] Fixed the issue with slow creation of the OpenStack clients and pools by redesigning ceph-controller and increasing its efficiency and speed.

  • [8618] Fixed the issue with missing pools during a Ceph cluster deployment.

  • [8111] Fixed the issue with a Ceph cluster remaining available after deleting it using the Container Cloud web UI or deleting the KaaSCephCluster object from the Kubernetes namespace using the CLI.

  • [8409, 3836] Refactored and stabilized the upgrade procedure to prevent locks during the upgrade operations.

  • [8925] Fixed improper handling of errors in lcm-controller that may lead to its panic.

  • [8361] Fixed the issue with admission-controller allowing addition of duplicated node labels per machine.

  • [8402] Fixed the issue with the AWS provider failing during node labeling with the Observed a panic: “invalid memory address or nil pointer dereference” error if privateIP is not set for a machine.

  • [7673] Moved logs collection of the bootstrap cluster to the /bootstrap subdirectory to prevent unintentional erasure of the management and regional cluster logs.

Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.3.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.15.4

aws-credentials-controller

1.15.4

Bare metal

baremetal-operator Updated

3.1.6

baremetal-public-api Updated

3.1.6

baremetal-provider Updated

1.15.4

httpd

2.4.46-20201001171500

ironic Updated

ussuri-bionic-20201111180110

ironic-operator Updated

base-bionic-20201106182102

kaas-ipam Updated

20201210175212

local-volume-provisioner

1.0.4-mcp

mariadb

10.4.14-bionic-20200812025059

IAM

iam

1.1.22

iam-controller Updated

1.15.4

keycloak

9.0.0

Container Cloud Updated

admission-controller

1.15.4

byo-credentials-controller

1.15.4

byo-provider

1.15.4

kaas-public-api

1.15.4

kaas-exporter

1.15.4

kaas-ui

1.15.4

lcm-controller

0.2.0-224-g5c413d37

release-controller

1.15.4

OpenStack Updated

openstack-provider

1.15.4

os-credentials-controller

1.15.4

VMware vSphere Updated

vsphere-provider

1.15.4

vsphere-credentials-controller

1.15.4

Artifacts

This section lists the component artifacts of the Mirantis Container Cloud release 2.3.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-3.1.6.tgz

baremetal-public-api Updated

https://binary.mirantis.com/bm/helm/baremetal-public-api-3.1.6.tgz

ironic-python-agent.kernel Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-ussuri-bionic-debug-20201119132200

ironic-python-agent.initramfs Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-ussuri-bionic-debug-20201119132200

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-3.1.6.tgz

local-volume-provisioner

https://binary.mirantis.com/bm/helm/local-volume-provisioner-1.0.4-mcp.tgz

Docker images

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-bionic-20201113171304

httpd

mirantis.azurecr.io/bm/external/httpd:2.4.46-20201001171500

ironic Updated

mirantis.azurecr.io/openstack/ironic:ussuri-bionic-20201111180110

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:ussuri-bionic-20201111180110

ironic-operator Updated

mirantis.azurecr.io/bm/ironic-operator:base-bionic-20201106182102

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-bionic-20201210175212

mariadb

mirantis.azurecr.io/general/mariadb:10.4.14-bionic-20200812025059


Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.15.4.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.15.4.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.15.4.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.15.4.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.15.4.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.15.4.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.15.4.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.15.4.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.15.4.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.15.4.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.15.4.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.15.4.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.15.4.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.15.4.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.15.4.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.15.4.tgz

vsphere-credentials-controller

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.15.4.tgz

vsphere-provider

https://binary.mirantis.com/core/helm/vsphere-provider-1.15.4.tgz

Docker images Updated

admission-controller

mirantis.azurecr.io/core/admission-controller:1.15.4

aws-cluster-api-controller

mirantis.azurecr.io/core/aws-cluster-api-controller:1.15.4

aws-credentials-controller

mirantis.azurecr.io/core/aws-credentials-controller:1.15.4

byo-cluster-api-controller

mirantis.azurecr.io/core/byo-cluster-api-controller:1.15.4

byo-credentials-controller

mirantis.azurecr.io/core/byo-credentials-controller:1.15.4

cluster-api-provider-baremetal

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.15.4

frontend

mirantis.azurecr.io/core/frontend:1.15.4

iam-controller

mirantis.azurecr.io/core/iam-controller:1.15.4

lcm-controller

mirantis.azurecr.io/core/lcm-controller:v0.2.0-224-g5c413d37

openstack-cluster-api-controller

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.15.4

os-credentials-controller

mirantis.azurecr.io/core/os-credentials-controller:1.15.4

release-controller

mirantis.azurecr.io/core/release-controller:1.15.4

vsphere-cluster-api-controller

mirantis.azurecr.io/core/vsphere-api-controller:1.15.4

vsphere-credentials-controller

mirantis.azurecr.io/core/vsphere-credentials-controller:1.15.4


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

iamctl-linux

http://binary.mirantis.com/iam/bin/iamctl-0.3.18-linux

iamctl-darwin

http://binary.mirantis.com/iam/bin/iamctl-0.3.18-darwin

iamctl-windows

http://binary.mirantis.com/iam/bin/iamctl-0.3.18-windows

Helm charts

iam Updated

http://binary.mirantis.com/iam/helm/iam-1.1.22.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.3.tgz

keycloak-proxy Updated

http://binary.mirantis.com/core/helm/keycloak_proxy-1.14.3.tgz

Docker images

api

mirantis.azurecr.io/iam/api:0.3.18

auxiliary

mirantis.azurecr.io/iam/auxiliary:0.3.18

kubernetes-entrypoint

mirantis.azurecr.io/iam/external/kubernetes-entrypoint:v0.3.1

mariadb

mirantis.azurecr.io/iam/external/mariadb:10.2.18

keycloak Updated

mirantis.azurecr.io/iam/keycloak:0.3.19

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:6.0.1

2.2.0

This section outlines release notes for the Mirantis Container Cloud GA release 2.2.0. This release introduces support for the Cluster release 5.9.0 that is based on Mirantis Kubernetes Engine 3.3.3, Mirantis Container Runtime 19.03.12, and Kubernetes 1.18. It also introduces support for the Cluster release 6.8.1 that adds support for the Mirantis OpenStack for Kubernetes (MOSK) product.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.2.0. For the list of enhancements in the Cluster release 5.9.0 and Cluster release 6.8.1 introduced by the Container Cloud release 2.2.0, see 5.9.0 and 6.8.1.


Support for VMware vSphere provider on RHEL

TECHNICAL PREVIEW

Introduced Technology Preview support for the VMware vSphere cloud provider on RHEL, including the ability to create and operate managed clusters using the Container Cloud web UI.

Deployment of an additional regional vSphere-based cluster and attachment of an existing Mirantis Kubernetes Engine (MKE) cluster to a vSphere-based management cluster are still in development and will be announced in one of the following Container Cloud releases.

Note

For the Technology Preview feature definition, refer to Technology Preview features.

Kernel parameters management through BareMetalHostProfile

Implemented an API for managing the kernel parameters of bare metal hosts, typically managed by sysctl, through the fields of the BareMetalHost and BareMetalHostProfile objects.
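
A minimal sketch of how such parameters might be declared in a host profile is shown below. The kernelParameters and sysctl field names are illustrative assumptions rather than the confirmed schema, so verify the exact object structure in the Container Cloud API documentation.

  # Illustrative BareMetalHostProfile fragment; the API group and the field
  # names under spec are assumptions used for demonstration only.
  apiVersion: kaas.mirantis.com/v1alpha1
  kind: BareMetalHostProfile
  metadata:
    name: worker-storage-profile
    namespace: default
  spec:
    kernelParameters:
      sysctl:
        vm.max_map_count: "262144"
        fs.inotify.max_user_instances: "4096"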

Support of multiple subnets per cluster

Implemented support for multiple subnets per Container Cloud cluster with the ability to specify a different network type for each subnet. Introduced the SubnetPool object that enables automatic creation of Subnet objects. Also added the L3Layout section to L2Template.spec. The L3Layout configuration defines the scopes of the subnets to be used and enables auto-creation of subnets from a subnet pool.
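
The following minimal sketch illustrates the idea. The field names under l3Layout (subnetName, subnetPool, scope) and the API group are assumptions based on the behavior described above, so verify them against the bare metal API documentation before use.

  # Illustrative L2Template fragment; API group and field names are assumptions.
  apiVersion: ipam.mirantis.com/v1alpha1
  kind: L2Template
  metadata:
    name: managed-l2template
    namespace: managed-ns
  spec:
    l3Layout:
      # Reference an existing Subnet object by name
      - subnetName: lcm-subnet
        scope: namespace
      # Auto-create a Subnet from a SubnetPool
      - subnetName: storage-subnet
        subnetPool: storage-pool
        scope: namespace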

Optimization of the Container Cloud logs collection

Optimized user experience with the Container Cloud resources logs collection:

  • Added a separate file with a human-readable table that contains information about cluster events

  • Implemented collecting of system logs from cluster nodes

Container Cloud API documentation for bare metal

On top of continuous improvements delivered to the existing Container Cloud guides, added the Mirantis Container Cloud API section to the Operations Guide. This section is intended only for advanced Infrastructure Operators who are familiar with Kubernetes Cluster API.

Currently, this section contains descriptions and examples of the Container Cloud API resources for the bare metal cloud provider. The API documentation for the OpenStack, AWS, and VMware vSphere API resources will be added in the upcoming Container Cloud releases.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.2.0 including the Cluster release 5.9.0.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


AWS
[8013] Managed cluster deployment requiring PVs may fail

Fixed in the Cluster release 7.0.0

Note

The issue below affects only the Kubernetes 1.18 deployments. Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

On a management cluster with multiple AWS-based managed clusters, some clusters fail to complete the deployments that require persistent volumes (PVs), for example, Elasticsearch. Some of the affected pods get stuck in the Pending state with the 'pod has unbound immediate PersistentVolumeClaims' and 'node(s) had volume node affinity conflict' errors.

Warning

The workaround below applies to HA deployments where data can be rebuilt from replicas. If you have a non-HA deployment, back up any existing data before proceeding, since all data will be lost while applying the workaround.

Workaround:

  1. Obtain the persistent volume claims related to the storage mounts of the affected pods:

    kubectl get pod/<pod_name1> pod/<pod_name2> \
    -o jsonpath='{.spec.volumes[?(@.persistentVolumeClaim)].persistentVolumeClaim.claimName}'
    

    Note

    In the command above and in the subsequent steps, substitute the parameters enclosed in angle brackets with the corresponding values.

  2. Delete the affected Pods and PersistentVolumeClaims to reschedule them. For example, for StackLight:

    kubectl -n stacklight delete \
      pod/<pod_name1> pod/<pod_name2> ... \
      pvc/<pvc_name1> pvc/<pvc_name2> ...


Bare metal
[6988] LVM fails to deploy if the volume group name already exists

Fixed in Container Cloud 2.5.0

During a management or managed cluster deployment, LVM cannot be deployed on a new disk if an old volume group with the same name already exists on the target hardware node but on a different disk.

Workaround:

In the bare metal host profile specific to your hardware configuration, add the wipe: true parameter to the device that fails to be deployed. For the procedure details, see Operations Guide: Create a custom host profile.
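
The following fragment is an illustrative sketch only; the actual devices and partitions layout depends on your hardware profile, and the exact field nesting may differ between product versions.

  # Illustrative BareMetalHostProfile fragment: wipe the disk that previously
  # hosted the conflicting volume group so that LVM can be re-created on it.
  spec:
    devices:
      - device:
          wipe: true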

[7655] Wrong status for an incorrectly configured L2 template

Fixed in 2.11.0

If an L2 template is configured incorrectly, a bare metal cluster is deployed successfully but with runtime errors in the IpamHost object.

Workaround:

If you suspect that the machine is not working properly because of incorrect network configuration, verify the status of the corresponding IpamHost object. Inspect the l2RenderResult and ipAllocationResult object fields for error messages.
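
For illustration, an IpamHost status with rendering errors may look similar to the excerpt below. The exact message format is an assumption; rely on the actual object output, for example, from kubectl get ipamhost <name> -o yaml.

  # Illustrative IpamHost status excerpt; messages depend on the actual
  # configuration error in the L2 template.
  status:
    l2RenderResult: 'ERR: failed to render L2 template: undefined subnet "storage-subnet"'
    ipAllocationResult: 'ERR: no addresses available in subnet "lcm-subnet"'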


[8560] Manual deletion of BareMetalHost leads to its silent removal

Fixed in Container Cloud 2.5.0

If a BareMetalHost object is manually removed from a managed cluster, it is silently removed without power-off and deprovisioning, which leads to managed cluster failures.

Workaround:

Do not manually delete a BareMetalHost that has the Provisioned status.


IAM
[2757] IAM fails to start during management cluster deployment

Fixed in Container Cloud 2.4.0

During a management cluster deployment, IAM fails to start with the IAM pods being in the CrashLoopBackOff status.

Workaround:

  1. Log in to the bootstrap node.

  2. Remove the iam-mariadb-state configmap:

    kubectl delete cm -n kaas iam-mariadb-state
    
  3. Manually delete the mariadb pods:

    kubectl delete po -n kaas mariadb-server-{0,1,2}
    

    Wait for the pods to start. If a mariadb pod does not start with the 'connection to peer timed out' exception, repeat step 2.

  4. Obtain the MariaDB database admin password:

    kubectl get secrets -n kaas mariadb-dbadmin-password \
    -o jsonpath='{.data.MYSQL_DBADMIN_PASSWORD}' | base64 -d ; echo
    
  5. Log in to MariaDB:

    kubectl exec -it -n kaas mariadb-server-0 -- bash -c 'mysql -uroot -p<mysqlDbadminPassword>'
    

    Substitute <mysqlDbadminPassword> with the corresponding value obtained in the previous step.

  6. Run the following command:

    DROP DATABASE IF EXISTS keycloak;
    
  7. Manually delete the Keycloak pods:

    kubectl delete po -n kaas iam-keycloak-{0,1,2}
    

Storage
[7073] Cannot automatically remove a Ceph node

Fixed in 2.16.0

When removing a worker node, it is not possible to automatically remove a Ceph node. The workaround is to manually remove the Ceph node from the Ceph cluster as described in Operations Guide: Add, remove, or reconfigure Ceph nodes before removing the worker node from your deployment.


Container Cloud web UI
[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs due to the token missing the necessary role for the new project. As a workaround, relogin to the Container Cloud web UI.


Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.2.0 including the Cluster release 5.9.0:

  • [8012] Fixed the issue with helm-controller pod being stuck in the CrashLoopBackOff state after reattaching of a Mirantis Kubernetes Engine (MKE) cluster.

  • [7131] Fixed the issue with the deployment of a managed cluster failing during the Ceph Monitor or Manager deployment.

  • [6164] Fixed the issue with the number of placement groups (PGs) per Ceph OSD being too small and the Ceph cluster having the HEALTH_WARN status.

  • [8302] Fixed the issue with deletion of a regional cluster leading to the deletion of the related management cluster.

  • [7722] Fixed the issue with the Internal Server Error or similar errors appearing in the HelmBundle controller logs after bootstrapping the management cluster.

Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.2.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.14.0

aws-credentials-controller

1.14.0

Bare metal

baremetal-operator Updated

3.1.3

baremetal-public-api Updated

3.1.3

baremetal-provider Updated

1.14.0

httpd

2.4.46-20201001171500

ironic Updated

ussuri-bionic-20201021180016

ironic-operator Updated

base-bionic-20201023172943

kaas-ipam Updated

20201026094912

local-volume-provisioner

1.0.4-mcp

mariadb

10.4.14-bionic-20200812025059

IAM

iam Updated

1.1.22

iam-controller Updated

1.14.0

keycloak

9.0.0

Container Cloud Updated

admission-controller

1.14.0

byo-credentials-controller

1.14.0

byo-provider

1.14.3

kaas-public-api

1.14.0

kaas-exporter

1.14.0

kaas-ui

1.14.2

lcm-controller

0.2.0-178-g8cc488f8

release-controller

1.14.0

OpenStack Updated

openstack-provider

1.14.0

os-credentials-controller

1.14.0

VMware vSphere New

vsphere-provider

1.14.1

vsphere-credentials-controller

1.14.1

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.2.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

Target system image (ubuntu-bionic)

https://binary.mirantis.com/bm/bin/efi/ubuntu/qcow2-bionic-debug-20200730084816

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-3.1.3.tgz

baremetal-public-api Updated

https://binary.mirantis.com/bm/helm/baremetal-public-api-3.1.3.tgz

ironic-python-agent.kernel Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-ussuri-bionic-debug-20201022084817

ironic-python-agent.initramfs Updated

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-ussuri-bionic-debug-20201022084817

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-3.1.3.tgz

local-volume-provisioner

https://binary.mirantis.com/bm/helm/local-volume-provisioner-1.0.4-mcp.tgz

Docker images

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-bionic-20201028131325

httpd

mirantis.azurecr.io/bm/external/httpd:2.4.46-20201001171500

ironic Updated

mirantis.azurecr.io/openstack/ironic:ussuri-bionic-20201021180016

ironic-inspector Updated

mirantis.azurecr.io/openstack/ironic-inspector:ussuri-bionic-20201021180016

ironic-operator Updated

mirantis.azurecr.io/bm/ironic-operator:base-bionic-20201023172943

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-bionic-20201026094912

mariadb

mirantis.azurecr.io/general/mariadb:10.4.14-bionic-20200812025059


Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.14.0.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.14.0.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.14.0.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.14.0.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.14.0.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.14.0.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.14.0.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.14.3.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.14.0.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.14.0.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.14.0.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.14.2.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.14.0.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.14.0.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.14.0.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.14.0.tgz

vsphere-credentials-controller New

https://binary.mirantis.com/core/helm/vsphere-credentials-controller-1.14.1.tgz

vsphere-provider New

https://binary.mirantis.com/core/helm/vsphere-provider-1.14.1.tgz

Docker images for Container Cloud deployment

admission-controller Updated

mirantis.azurecr.io/core/admission-controller:1.14.0

aws-cluster-api-controller

mirantis.azurecr.io/core/aws-cluster-api-controller:1.14.0

aws-credentials-controller Updated

mirantis.azurecr.io/core/aws-credentials-controller:1.14.0

byo-cluster-api-controller

mirantis.azurecr.io/core/byo-cluster-api-controller:1.14.3

byo-credentials-controller Updated

mirantis.azurecr.io/core/byo-credentials-controller:1.14.0

cluster-api-provider-baremetal Updated

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.14.0

frontend

mirantis.azurecr.io/core/frontend:1.14.2

iam-controller Updated

mirantis.azurecr.io/core/iam-controller:1.14.0

lcm-controller Updated

mirantis.azurecr.io/core/lcm-controller:v0.2.0-178-g8cc488f8

openstack-cluster-api-controller

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.14.0

os-credentials-controller Updated

mirantis.azurecr.io/core/os-credentials-controller:1.14.0

release-controller Updated

mirantis.azurecr.io/core/release-controller:1.14.0

vsphere-cluster-api-controller New

mirantis.azurecr.io/core/vsphere-api-controller:1.14.1

vsphere-credentials-controller New

mirantis.azurecr.io/core/vsphere-credentials-controller:1.14.1


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-268-3cf7f17-darwin

iamctl-linux

http://binary.mirantis.com/iam/bin/iamctl-0.3.18-linux

iamctl-darwin

http://binary.mirantis.com/iam/bin/iamctl-0.3.18-darwin

iamctl-windows

http://binary.mirantis.com/iam/bin/iamctl-0.3.18-windows

Helm charts

iam Updated

http://binary.mirantis.com/iam/helm/iam-1.1.22.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

keycloak-proxy Updated

http://binary.mirantis.com/core/helm/keycloak_proxy-1.14.3.tgz

Docker images

api

mirantis.azurecr.io/iam/api:0.3.18

auxiliary

mirantis.azurecr.io/iam/auxiliary:0.3.18

kubernetes-entrypoint

mirantis.azurecr.io/iam/external/kubernetes-entrypoint:v0.3.1

mariadb

mirantis.azurecr.io/iam/external/mariadb:10.2.18

keycloak Updated

mirantis.azurecr.io/iam/keycloak:0.3.19

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:6.0.1

2.1.0

This section outlines release notes for the Mirantis Container Cloud GA release 2.1.0. This release introduces support for the Cluster release 5.8.0 that is based on Mirantis Kubernetes Engine 3.3.3, Mirantis Container Runtime 19.03.12, and Kubernetes 1.18.

Enhancements

This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.1.0. For the list of enhancements in the Cluster release 5.8.0 introduced by the Container Cloud release 2.1.0, see 5.8.0.


Node labeling for machines

Implemented the possibility to assign labels to specific machines with dedicated system and hardware resources through the Container Cloud web UI. For example, you can label the StackLight nodes that run Elasticsearch and require more resources than a standard node, so that the StackLight component services run on these dedicated nodes. You can label a machine before or after it is deployed. The list of available labels is taken from the current Cluster release.

Node labeling greatly improves cluster performance and prevents cluster resources from being quickly exhausted.
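
For reference, the assigned labels are stored on the underlying Machine object. The fragment below is an illustrative assumption of how such a label may appear in the provider spec; the web UI manages this field for you, and the exact field path may differ between releases.

  # Illustrative Machine fragment; the nodeLabels field path is an assumption.
  spec:
    providerSpec:
      value:
        nodeLabels:
          - key: stacklight
            value: enabled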

AWS resources discovery in Container Cloud web UI

Improved the user experience during managed cluster creation using the Container Cloud web UI by implementing drop-down menus with the available supported values for the following AWS resources:

  • AWS region

  • AWS AMI ID

  • AWS instance type

To apply the feature to existing deployments, update the IAM policies for AWS.

Credentials statuses for OpenStack and AWS

Implemented the following statuses for the OpenStack-based and AWS-based credentials in the Container Cloud web UI:

  • Ready

    Credentials are valid and ready to be used for a managed cluster creation.

  • In Use

    Credentials are being used by a managed cluster.

  • Error

    Credentials are invalid. You can hover over the Error status to determine the reason for the issue.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.1.0.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


AWS
[8013] Managed cluster deployment requiring PVs may fail

Fixed in the Cluster release 7.0.0

Note

The issue below affects only the Kubernetes 1.18 deployments. Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

On a management cluster with multiple AWS-based managed clusters, some clusters fail to complete the deployments that require persistent volumes (PVs), for example, Elasticsearch. Some of the affected pods get stuck in the Pending state with the 'pod has unbound immediate PersistentVolumeClaims' and 'node(s) had volume node affinity conflict' errors.

Warning

The workaround below applies to HA deployments where data can be rebuilt from replicas. If you have a non-HA deployment, back up any existing data before proceeding, since all data will be lost while applying the workaround.

Workaround:

  1. Obtain the persistent volume claims related to the storage mounts of the affected pods:

    kubectl get pod/<pod_name1> pod/<pod_name2> \
    -o jsonpath='{.spec.volumes[?(@.persistentVolumeClaim)].persistentVolumeClaim.claimName}'
    

    Note

    In the command above and in the subsequent steps, substitute the parameters enclosed in angle brackets with the corresponding values.

  2. Delete the affected Pods and PersistentVolumeClaims to reschedule them. For example, for StackLight:

    kubectl -n stacklight delete \
      pod/<pod_name1> pod/<pod_name2> ... \
      pvc/<pvc_name1> pvc/<pvc_name2> ...


Bare metal
[6988] LVM fails to deploy if the volume group name already exists

Fixed in Container Cloud 2.5.0

During a management or managed cluster deployment, LVM cannot be deployed on a new disk if an old volume group with the same name already exists on the target hardware node but on a different disk.

Workaround:

In the bare metal host profile specific to your hardware configuration, add the wipe: true parameter to the device that fails to be deployed. For the procedure details, see Operations Guide: Create a custom host profile.


IAM
[2757] IAM fails to start during management cluster deployment

Fixed in Container Cloud 2.4.0

During a management cluster deployment, IAM fails to start with the IAM pods being in the CrashLoopBackOff status.

Workaround:

  1. Log in to the bootstrap node.

  2. Remove the iam-mariadb-state configmap:

    kubectl delete cm -n kaas iam-mariadb-state
    
  3. Manually delete the mariadb pods:

    kubectl delete po -n kaas mariadb-server-{0,1,2}
    

    Wait for the pods to start. If a mariadb pod does not start with the 'connection to peer timed out' exception, repeat step 2.

  4. Obtain the MariaDB database admin password:

    kubectl get secrets -n kaas mariadb-dbadmin-password \
    -o jsonpath='{.data.MYSQL_DBADMIN_PASSWORD}' | base64 -d ; echo
    
  5. Log in to MariaDB:

    kubectl exec -it -n kaas mariadb-server-0 -- bash -c 'mysql -uroot -p<mysqlDbadminPassword>'
    

    Substitute <mysqlDbadminPassword> with the corresponding value obtained in the previous step.

  6. Run the following command:

    DROP DATABASE IF EXISTS keycloak;
    
  7. Manually delete the Keycloak pods:

    kubectl delete po -n kaas iam-keycloak-{0,1,2}
    

Storage
[6164] Small number of PGs per Ceph OSD

Fixed in 2.2.0

After deploying a managed cluster with Ceph, the number of placement groups (PGs) per Ceph OSD may be too small and the Ceph cluster may have the HEALTH_WARN status:

health: HEALTH_WARN
        too few PGs per OSD (3 < min 30)

The workaround is to enable the PG balancer to properly manage the number of PGs:

kubectl exec -it $(kubectl get pod -l "app=rook-ceph-tools" --all-namespaces -o jsonpath='{.items[0].metadata.name}') -n rook-ceph -- bash
ceph mgr module enable pg_autoscaler
[7131] rook-ceph-mgr fails during managed cluster deployment

Fixed in 2.2.0

Occasionally, the deployment of a managed cluster may fail during the Ceph Monitor or Manager deployment. In this case, the Ceph cluster may be down and a stack trace similar to the following one may be present in the Ceph Manager logs:

kubectl -n rook-ceph logs rook-ceph-mgr-a-c5dc846f8-k68rs

/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.9/rpm/el7/BUILD/ceph-14.2.9/src/mon/MonMap.h: In function 'void MonMap::add(const mon_info_t&)' thread 7fd3d3744b80 time 2020-09-03 10:16:46.586388
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.9/rpm/el7/BUILD/ceph-14.2.9/src/mon/MonMap.h: 195: FAILED ceph_assert(addr_mons.count(a) == 0)
ceph version 14.2.9 (581f22da52345dba46ee232b73b990f06029a2a0) nautilus (stable)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14a) [0x7fd3ca9b2875]
2: (()+0x253a3d) [0x7fd3ca9b2a3d]
3: (MonMap::add(mon_info_t const&)+0x80) [0x7fd3cad49190]
4: (MonMap::add(std::string const&, entity_addrvec_t const&, int)+0x110) [0x7fd3cad493a0]
5: (MonMap::init_with_ips(std::string const&, bool, std::string const&)+0xc9) [0x7fd3cad43849]
6: (MonMap::build_initial(CephContext*, bool, std::ostream&)+0x314) [0x7fd3cad45af4]
7: (MonClient::build_initial_monmap()+0x130) [0x7fd3cad2e140]
8: (MonClient::get_monmap_and_config()+0x5f) [0x7fd3cad365af]
9: (global_pre_init(std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > > const*, std::vector<char const*, std::allocator<char const*> >&, unsigned int, code_environment_t, int)+0x524) [0x55ce86711444]
10: (global_init(std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > > const*, std::vector<char const*, std::allocator<char const*> >&, unsigned int, code_environment_t, int, char const*, bool)+0x76) [0x55ce86711b56]
11: (main()+0x136) [0x55ce864ff9a6]
12: (__libc_start_main()+0xf5) [0x7fd3c6e73555]
13: (()+0xfc010) [0x55ce86505010]

The workaround is to start the managed cluster deployment from scratch.

[7073] Cannot automatically remove a Ceph node

Fixed in 2.16.0

When removing a worker node, it is not possible to automatically remove a Ceph node. The workaround is to manually remove the Ceph node from the Ceph cluster as described in Operations Guide: Add, remove, or reconfigure Ceph nodes before removing the worker node from your deployment.


Container Cloud web UI
[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs due to the token missing the necessary role for the new project. As a workaround, relogin to the Container Cloud web UI.


Addressed issues

In the Mirantis Container Cloud release 2.1.0, the following issues have been addressed:

  • [7281] Fixed the issue with a management cluster bootstrap script failing if there was a space in the PATH environment variable.

  • [7205] Fixed the issue with some cluster objects being stuck during deletion of an AWS-based managed cluster due to unresolved VPC dependencies.

  • [7304] Fixed the issue with failure to reattach a Mirantis Kubernetes Engine (MKE) cluster with the same name.

  • [7101] Fixed the issue with the monitoring of Ceph and Ironic being enabled when Ceph and Ironic are disabled on the baremetal-based clusters.

  • [7324] Fixed the issue with the monitoring of Ceph being disabled on the baremetal-based managed clusters due to the missing provider: BareMetal parameter.

  • [7180] Fixed the issue with lcm-controller periodically failing with the invalid memory address or nil pointer dereference runtime error.

  • [7251] Fixed the issue with setting up the OIDC integration on the MKE side.

  • [7326] Fixed the issue with the missing entry for the host itself in /etc/hosts causing failure of the services that require the node FQDN.

  • [6989] Fixed the issue with baremetal-operator ignoring the clean failed provisioning state if a node fails to deploy on a baremetal-based managed cluster.

  • [7231] Fixed the issue with the baremetal-provider pod not restarting after the ConfigMap changes and causing the telemeter-client pod to fail during deployment.

Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.1.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Container Cloud release components versions

Component

Application/Service

Version

AWS Updated

aws-provider

1.12.2

aws-credentials-controller

1.12.2

Bare metal

baremetal-operator Updated

3.1.0

baremetal-public-api New

3.1.0

baremetal-provider Updated

1.12.2

httpd Updated

2.4.46-20201001171500

ironic

train-bionic-20200803180020

ironic-operator

base-bionic-20200805144858

kaas-ipam Updated

20201007180518

local-volume-provisioner

1.0.4-mcp

mariadb Updated

10.4.14-bionic-20200812025059

IAM

iam Updated

1.1.18

iam-controller Updated

1.12.2

keycloak

9.0.0

Container Cloud Updated

admission-controller

1.12.3

byo-credentials-controller

1.12.2

byo-provider

1.12.2

kaas-public-api

1.12.2

kaas-exporter

1.12.2

kaas-ui

1.12.2

lcm-controller

0.2.0-169-g5668304d

release-controller

1.12.2

OpenStack Updated

openstack-provider

1.12.2

os-credentials-controller

1.12.2

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.1.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Binaries

Target system image (ubuntu-bionic)

https://binary.mirantis.com/bm/bin/efi/ubuntu/qcow2-bionic-debug-20200730084816

baremetal-operator Updated

https://binary.mirantis.com/bm/helm/baremetal-operator-3.1.0.tgz

baremetal-public-api New

https://binary.mirantis.com/bm/helm/baremetal-public-api-3.1.0.tgz

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-train-bionic-debug-20200730084816

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-train-bionic-debug-20200730084816

kaas-ipam Updated

https://binary.mirantis.com/bm/helm/kaas-ipam-3.1.0.tgz

local-volume-provisioner

https://binary.mirantis.com/bm/helm/local-volume-provisioner-1.0.4-mcp.tgz

Docker images

baremetal-operator Updated

mirantis.azurecr.io/bm/baremetal-operator:base-bionic-20201005150946

httpd Updated

mirantis.azurecr.io/bm/external/httpd:2.4.46-20201001171500

ironic

mirantis.azurecr.io/openstack/ironic:train-bionic-20200803180020

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:train-bionic-20200803180020

ironic-operator

mirantis.azurecr.io/bm/ironic-operator:base-bionic-20200805144858

kaas-ipam Updated

mirantis.azurecr.io/bm/kaas-ipam:base-bionic-20201007180518

mariadb Updated

mirantis.azurecr.io/general/mariadb:10.4.14-bionic-20200812025059


Core artifacts

Artifact

Component

Path

Bootstrap tarball Updated

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.12.2.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.12.2.tar.gz

Helm charts Updated

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.12.3.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.12.2.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.12.2.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.12.2.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.12.2.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.12.2.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.12.2.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.12.2.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.12.2.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.12.2.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.12.2.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.12.2.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.12.2.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.12.2.tgz

Docker images for Container Cloud deployment Updated

admission-controller

mirantis.azurecr.io/core/admission-controller:1.12.3

aws-cluster-api-controller

mirantis.azurecr.io/core/aws-cluster-api-controller:1.12.2

byo-cluster-api-controller

mirantis.azurecr.io/core/byo-cluster-api-controller:1.12.2

aws-credentials-controller

mirantis.azurecr.io/core/aws-credentials-controller:1.12.2

byo-credentials-controller

mirantis.azurecr.io/core/byo-credentials-controller:1.12.2

cluster-api-provider-baremetal

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.12.2

frontend

mirantis.azurecr.io/core/frontend:1.12.2

iam-controller

mirantis.azurecr.io/core/iam-controller:1.12.2

lcm-controller

mirantis.azurecr.io/core/lcm-controller:v0.2.0-169-g5668304d

openstack-cluster-api-controller

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.12.2

os-credentials-controller

mirantis.azurecr.io/core/os-credentials-controller:1.12.2

release-controller

mirantis.azurecr.io/core/release-controller:1.12.2


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-236-9cea809-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-236-9cea809-darwin

iamctl-linux

http://binary.mirantis.com/iam/bin/iamctl-0.3.18-linux

iamctl-darwin

http://binary.mirantis.com/iam/bin/iamctl-0.3.18-darwin

iamctl-windows

http://binary.mirantis.com/iam/bin/iamctl-0.3.18-windows

Helm charts

iam Updated

http://binary.mirantis.com/iam/helm/iam-1.1.18.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

keycloak-proxy Updated

http://binary.mirantis.com/core/helm/keycloak_proxy-1.12.2.tgz

Docker images

api

mirantis.azurecr.io/iam/api:0.3.18

auxiliary

mirantis.azurecr.io/iam/auxiliary:0.3.18

kubernetes-entrypoint

mirantis.azurecr.io/iam/external/kubernetes-entrypoint:v0.3.1

mariadb

mirantis.azurecr.io/iam/external/mariadb:10.2.18

keycloak

mirantis.azurecr.io/iam/keycloak:0.3.18

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:6.0.1

Apply updates to the AWS-based management clusters

To complete the AWS-based management cluster upgrade to version 2.1.0, manually update the IAM policies for AWS before updating your AWS-based managed clusters.

To update the IAM policies for AWS:

  1. Choose from the following options:

    • Update the IAM policies using get_container_cloud.sh:

      1. On any local machine, download and run the latest version of the Container Cloud bootstrap script:

        wget https://binary.mirantis.com/releases/get_container_cloud.sh
        
        chmod 0755 get_container_cloud.sh
        
        ./get_container_cloud.sh
        
      2. Change the directory to the kaas-bootstrap folder created by the get_container_cloud.sh script.

      3. Export the following parameters by adding the corresponding values for the AWS admin credentials:

        export AWS_SECRET_ACCESS_KEY=XXXXXXX
        export AWS_ACCESS_KEY_ID=XXXXXXX
        export AWS_DEFAULT_REGION=us-east-2
        
      4. Update the AWS CloudFormation template for IAM policy:

        ./container-cloud bootstrap aws policy
        
    • Update the IAM policies using the AWS Management Console:

      1. Log in to your AWS Management Console.

      2. Verify that the controllers.cluster-api-provider-aws.kaas.mirantis.com role or another AWS role that you use for Container Cloud users contains the following permissions:

        "ec2:DescribeRegions", "ec2:DescribeInstanceTypes"
        

        Otherwise, add these permissions manually. For an illustrative policy statement, see the sketch after this procedure.

  2. Proceed to updating your AWS-based managed clusters as described in Operations Guide: Update a managed cluster.
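
For reference, a CloudFormation-style policy statement granting the two permissions mentioned in the procedure above may look like the sketch below. The statement placement and resource scoping are assumptions to adapt to your existing policy layout.

  # Illustrative IAM policy fragment; adapt to your existing policy document.
  PolicyDocument:
    Version: "2012-10-17"
    Statement:
      - Effect: Allow
        Action:
          - "ec2:DescribeRegions"
          - "ec2:DescribeInstanceTypes"
        Resource: "*"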

2.0.0

This section outlines release notes for the initial Mirantis Container Cloud GA release 2.0.0. This release introduces support for the Cluster release 5.7.0 that is based on Mirantis Kubernetes Engine 3.3.3, Mirantis Container Runtime 19.03.12, and Kubernetes 1.18.

Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.0.0.


AWS
[8013] Managed cluster deployment requiring PVs may fail

Fixed in the Cluster release 7.0.0

Note

The issue below affects only the Kubernetes 1.18 deployments. Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

On a management cluster with multiple AWS-based managed clusters, some clusters fail to complete the deployments that require persistent volumes (PVs), for example, Elasticsearch. Some of the affected pods get stuck in the Pending state with the 'pod has unbound immediate PersistentVolumeClaims' and 'node(s) had volume node affinity conflict' errors.

Warning

The workaround below applies to HA deployments where data can be rebuilt from replicas. If you have a non-HA deployment, back up any existing data before proceeding, since all data will be lost while applying the workaround.

Workaround:

  1. Obtain the persistent volume claims related to the storage mounts of the affected pods:

    kubectl get pod/<pod_name1> pod/<pod_name2> \
    -o jsonpath='{.spec.volumes[?(@.persistentVolumeClaim)].persistentVolumeClaim.claimName}'
    

    Note

    In the command above and in the subsequent steps, substitute the parameters enclosed in angle brackets with the corresponding values.

  2. Delete the affected Pods and PersistentVolumeClaims to reschedule them. For example, for StackLight:

    kubectl -n stacklight delete \
      pod/<pod_name1> pod/<pod_name2> ... \
      pvc/<pvc_name1> pvc/<pvc_name2> ...


Bare metal
[6988] LVM fails to deploy if the volume group name already exists

Fixed in Container Cloud 2.5.0

During a management or managed cluster deployment, LVM cannot be deployed on a new disk if an old volume group with the same name already exists on the target hardware node but on a different disk.

Workaround:

In the bare metal host profile specific to your hardware configuration, add the wipe: true parameter to the device that fails to be deployed. For the procedure details, see Operations Guide: Create a custom host profile.


IAM
[2757] IAM fails to start during management cluster deployment

Fixed in Container Cloud 2.4.0

During a management cluster deployment, IAM fails to start with the IAM pods being in the CrashLoopBackOff status.

Workaround:

  1. Log in to the bootstrap node.

  2. Remove the iam-mariadb-state configmap:

    kubectl delete cm -n kaas iam-mariadb-state
    
  3. Manually delete the mariadb pods:

    kubectl delete po -n kaas mariadb-server-{0,1,2}
    

    Wait for the pods to start. If a mariadb pod does not start with the 'connection to peer timed out' exception, repeat step 2.

  4. Obtain the MariaDB database admin password:

    kubectl get secrets -n kaas mariadb-dbadmin-password \
    -o jsonpath='{.data.MYSQL_DBADMIN_PASSWORD}' | base64 -d ; echo
    
  5. Log in to MariaDB:

    kubectl exec -it -n kaas mariadb-server-0 -- bash -c 'mysql -uroot -p<mysqlDbadminPassword>'
    

    Substitute <mysqlDbadminPassword> with the corresponding value obtained in the previous step.

  6. Run the following command:

    DROP DATABASE IF EXISTS keycloak;
    
  7. Manually delete the Keycloak pods:

    kubectl delete po -n kaas iam-keycloak-{0,1,2}
    

StackLight
[7101] Monitoring of disabled components

Fixed in 2.1.0

On the baremetal-based clusters, the monitoring of Ceph and Ironic is enabled even when Ceph and Ironic are disabled. The issue with Ceph relates to both management and managed clusters; the issue with Ironic relates to managed clusters only.

Workaround:

  1. Open the StackLight configuration manifest as described in Operations Guide: Configure StackLight.

  2. Add the following parameter to the StackLight helmReleases values of the Cluster object to explicitly disable the required component monitoring:

    • For Ceph:

      helmReleases:
        - name: stacklight
          values:
            ...
            ceph:
              disabledOnBareMetal: true
            ...
      
    • For Ironic:

      helmReleases:
        - name: stacklight
          values:
            ...
            ironic:
              disabledOnBareMetal: true
            ...
      
[7324] Ceph monitoring disabled

Fixed in 2.1.0

Ceph monitoring may be disabled on the baremetal-based managed clusters due to a missing provider: BareMetal parameter.

Workaround:

  1. Open the StackLight configuration manifest as described in Operations Guide: Configure StackLight.

  2. Add the provider: BareMetal parameter to the StackLight helmReleases values of the Cluster object:

    spec:
      providerSpec:
        value:
          helmReleases:
          - name: stacklight
            values:
              ...
              provider: BareMetal
              ...
    

Storage
[6164] Small number of PGs per Ceph OSD

Fixed in 2.2.0

After deploying a managed cluster with Ceph, the number of placement groups (PGs) per Ceph OSD may be too small and the Ceph cluster may have the HEALTH_WARN status:

health: HEALTH_WARN
        too few PGs per OSD (3 < min 30)

The workaround is to enable the PG balancer to properly manage the number of PGs:

kubectl exec -it $(kubectl get pod -l "app=rook-ceph-tools" --all-namespaces -o jsonpath='{.items[0].metadata.name}') -n rook-ceph -- bash
ceph mgr module enable pg_autoscaler
[7131] rook-ceph-mgr fails during managed cluster deployment

Fixed in 2.2.0

Occasionally, the deployment of a managed cluster may fail during the Ceph Monitor or Manager deployment. In this case, the Ceph cluster may be down and a stack trace similar to the following one may be present in the Ceph Manager logs:

kubectl -n rook-ceph logs rook-ceph-mgr-a-c5dc846f8-k68rs

/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.9/rpm/el7/BUILD/ceph-14.2.9/src/mon/MonMap.h: In function 'void MonMap::add(const mon_info_t&)' thread 7fd3d3744b80 time 2020-09-03 10:16:46.586388
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.9/rpm/el7/BUILD/ceph-14.2.9/src/mon/MonMap.h: 195: FAILED ceph_assert(addr_mons.count(a) == 0)
ceph version 14.2.9 (581f22da52345dba46ee232b73b990f06029a2a0) nautilus (stable)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14a) [0x7fd3ca9b2875]
2: (()+0x253a3d) [0x7fd3ca9b2a3d]
3: (MonMap::add(mon_info_t const&)+0x80) [0x7fd3cad49190]
4: (MonMap::add(std::string const&, entity_addrvec_t const&, int)+0x110) [0x7fd3cad493a0]
5: (MonMap::init_with_ips(std::string const&, bool, std::string const&)+0xc9) [0x7fd3cad43849]
6: (MonMap::build_initial(CephContext*, bool, std::ostream&)+0x314) [0x7fd3cad45af4]
7: (MonClient::build_initial_monmap()+0x130) [0x7fd3cad2e140]
8: (MonClient::get_monmap_and_config()+0x5f) [0x7fd3cad365af]
9: (global_pre_init(std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > > const*, std::vector<char const*, std::allocator<char const*> >&, unsigned int, code_environment_t, int)+0x524) [0x55ce86711444]
10: (global_init(std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > > const*, std::vector<char const*, std::allocator<char const*> >&, unsigned int, code_environment_t, int, char const*, bool)+0x76) [0x55ce86711b56]
11: (main()+0x136) [0x55ce864ff9a6]
12: (__libc_start_main()+0xf5) [0x7fd3c6e73555]
13: (()+0xfc010) [0x55ce86505010]

The workaround is to start the managed cluster deployment from scratch.

[7073] Cannot automatically remove a Ceph node

Fixed in 2.16.0

When removing a worker node, it is not possible to automatically remove a Ceph node. The workaround is to manually remove the Ceph node from the Ceph cluster as described in Operations Guide: Add, remove, or reconfigure Ceph nodes before removing the worker node from your deployment.


Bootstrap
[7281] Space in PATH causes failure of bootstrap process

Fixed in 2.1.0

A management cluster bootstrap script fails if there is a space in the PATH environment variable. As a workaround, before running the bootstrap.sh script, verify that there are no spaces in the PATH environment variable.


Container Cloud web UI
[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs due to the token missing the necessary role for the new project. As a workaround, relogin to the Container Cloud web UI.


Components versions

The following table lists the major components and their versions of the Mirantis Container Cloud release 2.0.0.

Container Cloud release components versions

Component

Application/Service

Version

AWS

aws-provider

1.10.12

aws-credentials-controller

1.10.12

Bare metal

baremetal-operator

3.0.7

baremetal-provider

1.10.12

httpd

2.4.43-20200710111500

ironic

train-bionic-20200803180020

ironic-operator

base-bionic-20200805144858

kaas-ipam

20200807130953

local-volume-provisioner

1.0.4-mcp

mariadb

10.4.12-bionic-20200803130834

IAM

iam

1.1.16

iam-controller

1.10.12

keycloak

9.0.0

Container Cloud

admission-controller

1.10.12

byo-credentials-controller

1.10.12

byo-provider

1.10.12

kaas-public-api

1.10.12

kaas-exporter

1.10.12

kaas-ui

1.10.12

lcm-controller

0.2.0-149-g412c5a05

release-controller

1.10.12

OpenStack

openstack-provider

1.10.12

os-credentials-controller

1.10.12

Artifacts

This section lists the components artifacts of the Mirantis Container Cloud release 2.0.0.


Bare metal artifacts

Artifact

Component

Path

Binaries

Target system image (ubuntu-bionic)

https://binary.mirantis.com/bm/bin/efi/ubuntu/qcow2-bionic-debug-20200730084816

baremetal-operator

https://binary.mirantis.com/bm/helm/baremetal-operator-3.0.7.tgz

ironic-python-agent.kernel

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/kernel-train-bionic-debug-20200730084816

ironic-python-agent.initramfs

https://binary.mirantis.com/bm/bin/ironic/ipa/ubuntu/initramfs-train-bionic-debug-20200730084816

kaas-ipam

https://binary.mirantis.com/bm/helm/kaas-ipam-3.0.7.tgz

local-volume-provisioner

https://binary.mirantis.com/bm/helm/local-volume-provisioner-1.0.4-mcp.tgz

Docker images

baremetal-operator

mirantis.azurecr.io/bm/baremetal-operator:base-bionic-20200812172956

httpd

mirantis.azurecr.io/bm/external/httpd:2.4.43-20200710111500

ironic

mirantis.azurecr.io/openstack/ironic:train-bionic-20200803180020

ironic-inspector

mirantis.azurecr.io/openstack/ironic-inspector:train-bionic-20200803180020

ironic-operator

mirantis.azurecr.io/bm/ironic-operator:base-bionic-20200805144858

kaas-ipam

mirantis.azurecr.io/bm/kaas-ipam:base-bionic-20200807130953

mariadb

mirantis.azurecr.io/general/mariadb:10.4.12-bionic-20200803130834


Core components artifacts

Artifact

Component

Path

Bootstrap tarball

bootstrap-linux

https://binary.mirantis.com/core/bin/bootstrap-linux-1.10.12.tar.gz

bootstrap-darwin

https://binary.mirantis.com/core/bin/bootstrap-darwin-1.10.12.tar.gz

Helm charts

admission-controller

https://binary.mirantis.com/core/helm/admission-controller-1.10.12.tgz

aws-credentials-controller

https://binary.mirantis.com/core/helm/aws-credentials-controller-1.10.12.tgz

aws-provider

https://binary.mirantis.com/core/helm/aws-provider-1.10.12.tgz

baremetal-provider

https://binary.mirantis.com/core/helm/baremetal-provider-1.10.12.tgz

byo-credentials-controller

https://binary.mirantis.com/core/helm/byo-credentials-controller-1.10.12.tgz

byo-provider

https://binary.mirantis.com/core/helm/byo-provider-1.10.12.tgz

iam-controller

https://binary.mirantis.com/core/helm/iam-controller-1.10.12.tgz

kaas-exporter

https://binary.mirantis.com/core/helm/kaas-exporter-1.10.12.tgz

kaas-public-api

https://binary.mirantis.com/core/helm/kaas-public-api-1.10.12.tgz

kaas-ui

https://binary.mirantis.com/core/helm/kaas-ui-1.10.12.tgz

lcm-controller

https://binary.mirantis.com/core/helm/lcm-controller-1.10.12.tgz

openstack-provider

https://binary.mirantis.com/core/helm/openstack-provider-1.10.12.tgz

os-credentials-controller

https://binary.mirantis.com/core/helm/os-credentials-controller-1.10.12.tgz

release-controller

https://binary.mirantis.com/core/helm/release-controller-1.10.12.tgz

Docker images for Container Cloud deployment

aws-cluster-api-controller

mirantis.azurecr.io/core/aws-cluster-api-controller:1.10.12

aws-credentials-controller

mirantis.azurecr.io/core/aws-credentials-controller:1.10.12

byo-cluster-api-controller

mirantis.azurecr.io/core/byo-cluster-api-controller:1.10.12

byo-credentials-controller

mirantis.azurecr.io/core/byo-credentials-controller:1.10.12

cluster-api-provider-baremetal

mirantis.azurecr.io/core/cluster-api-provider-baremetal:1.10.12

frontend

mirantis.azurecr.io/core/frontend:1.10.12

iam-controller

mirantis.azurecr.io/core/iam-controller:1.10.12

lcm-controller

mirantis.azurecr.io/core/lcm-controller:v0.2.0-149-g412c5a05

openstack-cluster-api-controller

mirantis.azurecr.io/core/openstack-cluster-api-controller:1.10.12

os-credentials-controller

mirantis.azurecr.io/core/os-credentials-controller:1.10.12

release-controller

mirantis.azurecr.io/core/release-controller:1.10.12


IAM artifacts

Artifact

Component

Path

Binaries

hash-generate-linux

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-236-9cea809-linux

hash-generate-darwin

http://binary.mirantis.com/iam/bin/hash-generate-0.0.1-236-9cea809-darwin

iamctl-linux

http://binary.mirantis.com/iam/bin/iamctl-0.3.18-linux

iamctl-darwin

http://binary.mirantis.com/iam/bin/iamctl-0.3.18-darwin

iamctl-windows

http://binary.mirantis.com/iam/bin/iamctl-0.3.18-windows

Helm charts

iam

http://binary.mirantis.com/iam/helm/iam-1.1.16.tgz

iam-proxy

http://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

keycloak-proxy

http://binary.mirantis.com/core/helm/keycloak_proxy-1.10.12.tgz

Docker images

api

mirantis.azurecr.io/iam/api:0.3.18

auxiliary

mirantis.azurecr.io/iam/auxiliary:0.3.18

kubernetes-entrypoint

mirantis.azurecr.io/iam/external/kubernetes-entrypoint:v0.3.1

mariadb

mirantis.azurecr.io/iam/external/mariadb:10.2.18

keycloak

mirantis.azurecr.io/iam/keycloak:0.3.18

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:6.0.1

Cluster releases (managed)

This section outlines the release notes for major and patch Cluster releases that are supported by specific Container Cloud releases. For details about the Container Cloud releases, see: Container Cloud releases.

Major and patch versions update path

The primary distinction between major and patch product versions lies in the fact that major release versions introduce new functionalities, whereas patch release versions predominantly offer minor product enhancements, mostly CVE resolutions for your clusters.

Depending on your deployment needs, you can either update only between major Cluster releases or apply patch updates between major releases. Choosing the latter option ensures you receive security fixes as soon as they become available, but be prepared to update your cluster frequently, approximately once every three weeks. Otherwise, you can update only between major Cluster releases as each subsequent major Cluster release includes patch Cluster release updates of the previous major Cluster release.

17.x series (current)

Major and patch versions update path

The primary distinction between major and patch product versions lies in the fact that major release versions introduce new functionalities, whereas patch release versions predominantly offer minor product enhancements, mostly CVE resolutions for your clusters.

Depending on your deployment needs, you can either update only between major Cluster releases or apply patch updates between major releases. Choosing the latter option ensures you receive security fixes as soon as they become available, but be prepared to update your cluster frequently, approximately once every three weeks. Otherwise, you can update only between major Cluster releases as each subsequent major Cluster release includes patch Cluster release updates of the previous major Cluster release.

This section outlines release notes for supported major and patch Cluster releases of the 17.x series dedicated for Mirantis OpenStack for Kubernetes (MOSK).

17.4.x series

Major and patch versions update path

The primary distinction between major and patch product versions lies in the fact that major release versions introduce new functionalities, whereas patch release versions predominantly offer minor product enhancements, mostly CVE resolutions for your clusters.

Depending on your deployment needs, you can either update only between major Cluster releases or apply patch updates between major releases. Choosing the latter option ensures you receive security fixes as soon as they become available, but be prepared to update your cluster frequently, approximately once every three weeks. Otherwise, you can update only between major Cluster releases as each subsequent major Cluster release includes patch Cluster release updates of the previous major Cluster release.

This section outlines release notes for major and patch Cluster releases of the 17.4.x series dedicated for Mirantis OpenStack for Kubernetes (MOSK).

17.4.0

This section outlines release notes for the major Cluster release 17.4.0 that is introduced in the Container Cloud release 2.29.0. This Cluster release is based on the Cluster release 16.4.0. The Cluster release 17.4.0 supports:

  • Mirantis OpenStack for Kubernetes (MOSK) 25.1

  • Mirantis Kubernetes Engine (MKE) 3.7.19 with Kubernetes 1.27

  • Mirantis Container Runtime (MCR) 25.0.8

For the list of known and addressed issues, refer to the Container Cloud release 2.29.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 17.4.0 that is introduced in the Container Cloud release 2.29.0. For MOSK enhancements, see MOSK 25.1: New features.

Support for MKE 3.7.19 and MCR 25.0.8

Introduced support for Mirantis Container Runtime (MCR) 25.0.8 and Mirantis Kubernetes Engine (MKE) 3.7.19 that includes Kubernetes 1.27.16.

On existing clusters, MKE and MCR are updated to the latest supported version when you update your managed cluster to the Cluster release 17.4.0.

Improvements in the CIS Benchmark compliance for Ubuntu, MKE, and Docker

Added the following improvements in the CIS Benchmark compliance for Ubuntu, MKE, and Docker:

  • Introduced new password policies for local (Linux) user accounts. These policies match the rules described in CIS Benchmark compliance checks (executed by the Nessus scanner) for Ubuntu Linux 22.04 LTS v2.0.0 L1 Server, revision 1.1.

    The rules are applied automatically to all cluster nodes during cluster update. Therefore, if you use custom Linux accounts protected by passwords, pay attention to the following rules, as you may be forced to update a non-compliant password during login:

    • Password expiration interval: 365 days

    • Minimum password length: 14 symbols

    • Required symbols are capital letters, lower case letters, and digits

    • At least 2 characters of the new password must not be present in the old password

    • Maximum identical consecutive characters: 3 (allowed: aaa123, not allowed: aaaa123)

    • Maximum sequential characters: 3 (allowed: abc1xyz, not allowed: abcd123)

    • Dictionary check is enabled

    • You must not reuse an old password

    • After 3 failed password input attempts, the account is disabled for 15 minutes

  • Analyzed and reached 87% of pass rate in the CIS Benchmark compliance checks (executed by the Nessus scanner) for Ubuntu Linux 22.04 LTS v2.0.0 L1 Server, revision 1.1.

    Note

    Compliance results can vary between clusters due to configuration-dependent tests, such as server disk partitioning.

    If you require a detailed report of analyzed and fixed compliance checks, contact Mirantis support.

  • Analyzed and fixed the following checks (where possible, to reduce the number of failed components) in the Docker and MKE CIS benchmarks compliances:

    MKE

    Control ID

    Description

    5.1.3

    Minimize wildcard use in Roles and ClusterRoles: Over permissive access to resource types in Group

    5.2.8

    Minimize the admission of containers with added capabilities: Container with ANY capability

    5.7.3

    Apply Security Context to Your Pods and Containers: Policies - Defined Pods Security Context

    Docker

    Control ID

    Description

    5.26

    Ensure that container health is checked at runtime: No containers without health checks

    Note

    The control IDs may differ depending on the scanning tool.

    Note

    Some security scanners may produce false-negative results for some resources because native Docker containers and Kubernetes pods have different configuration mechanisms.
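
The following Python sketch is illustrative only; it is not the mechanism that Container Cloud uses to enforce the policies. It merely restates the password rules listed above (length, character classes, identical and sequential runs) so that a candidate password can be checked against them before login.

    # Illustrative sketch only: not the enforcement mechanism used by Container Cloud.
    # It restates the password rules above; dictionary checks, password reuse history,
    # and account lockout after failed attempts are not modeled here.
    def check_password(new: str, old: str = "") -> list[str]:
        problems = []
        if len(new) < 14:
            problems.append("shorter than 14 symbols")
        if not (any(c.isupper() for c in new) and any(c.islower() for c in new)
                and any(c.isdigit() for c in new)):
            problems.append("missing capital letters, lower case letters, or digits")
        if sum(1 for c in new if c not in old) < 2:
            problems.append("fewer than 2 characters differ from the old password")
        if any(new[i] == new[i + 1] == new[i + 2] == new[i + 3] for i in range(len(new) - 3)):
            problems.append("more than 3 identical consecutive characters")
        if any(all(ord(new[i + j + 1]) - ord(new[i + j]) == 1 for j in range(3))
               for i in range(len(new) - 3)):
            problems.append("more than 3 sequential characters")
        return problems

    print(check_password("AAaaaa123xyzLong"))  # flags the "aaaa" run
    print(check_password("Xabcd123yzLonger"))  # flags the "abcd" sequence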

Components versions

The following table lists the components versions of the Cluster release 17.4.0. The components that are newly added, updated, deprecated, or removed as compared to 17.3.0, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration Updated

Mirantis Kubernetes Engine

3.7.19

Container runtime Updated

Mirantis Container Runtime

25.0.8

Core Updated

cinder-csi-plugin

1.27.2-26

client-certificate-controller

1.42.9

csi-attacher Deprecated

4.2.0-9

csi-node-driver-registrar Deprecated

2.7.0-9

csi-provisioner Deprecated

3.4.1-9

csi-resizer Deprecated

1.7.0-9

csi-snapshotter Deprecated

6.2.1-mcc-6

livenessprobe Deprecated

2.9.0-9

metrics-server

0.6.3-12

policy-controller

1.42.9

Distributed storage Updated

Ceph

18.2.4-12.cve (Reef)

Rook

1.14.10-26

LCM Updated

helm-controller

1.42.9

lcm-ansible

0.27.0-35-g95a1b94

lcm-agent

1.42.9

StackLight

Alerta Updated

9.0.4

Alertmanager

0.25.0

Alertmanager Webhook ServiceNow

0.1

Blackbox Exporter Updated

0.25.0

cAdvisor Updated

0.49.1

Elasticsearch Curator

5.7.6

Elasticsearch Exporter Updated

1.8.0

Fluentd Updated

1.18.0

Grafana Updated

11.2.6

kube-state-metrics Updated

2.13.0

Metric Collector

0.1

Metricbeat

7.12.1

Node Exporter

1.8.2

OAuth2 Proxy

7.1.3

OpenSearch Updated

2.17.1

OpenSearch Dashboards Updated

2.17.1

Prometheus Updated

3.0.1

Prometheus ES Exporter Updated

0.14.1

Prometheus MS Teams

1.5.2

Prometheus Patroni Exporter

0.0.1

Prometheus Postgres Exporter Updated

0.16.0

Prometheus Relay

0.4

sf-notifier

0.4

sf-reporter

0.1

Spilo Updated

13-3.3-p2

Telegraf

n/a Removed

1.33.2 Updated

Telemeter

4.4

Artifacts

This section lists the artifacts of components included in the Cluster release 17.4.0. The components that are newly added, updated, deprecated, or removed as compared to 17.3.0, are marked with a corresponding superscript, for example, lcm-ansible Updated.
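
As an illustrative aid only, assuming the third-party requests package and network access to binary.mirantis.com, the following sketch checks that Helm chart URLs from the tables below are reachable, which can be useful before mirroring artifacts to an offline registry.

    # Illustrative aid only; requires the third-party "requests" package and network
    # access to binary.mirantis.com. The URLs are taken from the tables below; a
    # non-200 status means the artifact is not reachable from this host.
    import requests

    charts = [
        "https://binary.mirantis.com/core/helm/local-volume-provisioner-1.42.9.tgz",
        "https://binary.mirantis.com/core/helm/metallb-1.42.9.tgz",
    ]
    for url in charts:
        response = requests.head(url, allow_redirects=True, timeout=10)
        print(response.status_code, url)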

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.42.9.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.42.9.tgz

Docker images

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240716113922

metallb-controller Updated

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-96d6c3a2-amd64

metallb-speaker Updated

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-96d6c3a2-amd64

Ceph artifacts

Artifact

Component

Path

Helm charts

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.29.0-7.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.4-12.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.29.0-6

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.11.0-7.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.7.0-2.release

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v4.0.1-2.release

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.12.0-2.release

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.10.1-2.release

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v8.1.0-2.release

rook

mirantis.azurecr.io/ceph/rook:v1.14.10-26

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v8.1.0-2.release

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.42.9.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.42.9.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.42.9.tgz

openstack-cloud-controller-manager Deprecated

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.42.9.tgz

pause-update-controller New

https://binary.mirantis.com/core/helm/pause-update-controller-1.42.9.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.42.9.tgz

Docker images Updated

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-26

client-certificate-controller

mirantis.azurecr.io/core/client-certificate-controller:1.42.9

csi-attacher Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-9

csi-node-driver-registrar Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-9

csi-provisioner Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-9

csi-resizer Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-9

csi-snapshotter Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-8

livenessprobe Deprecated

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-9

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-12

openstack-cloud-controller-manager Deprecated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-26

pause-update-controller New

mirantis.azurecr.io/core/pause-update-controller:1.42.9

policy-controller

mirantis.azurecr.io/core/policy-controller:1.42.9

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-agent

mirantis.azurecr.io/core/bin/lcm-agent-1.42.9

lcm-ansible

mirantis.azurecr.io/lcm/bin/lcm-ansible/v0.27.0-35-g95a1b94/lcm-ansible.tar.gz

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.42.9.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.42.9.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/helm-controller:1.42.9

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.27.0-35-g95a1b94

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.27.0-35-g95a1b94

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-246.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-354.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-18.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards Updated

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-57.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-62.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-259.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.17.2.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20250217023014

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20250217023014

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20250214153133

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20250217023019

blackbox-exporter Updated

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20250207134744

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.49.1-20250217023014

configmap-reload Updated

mirantis.azurecr.io/stacklight/configmap-reload:v0.14.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter Updated

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.8.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.18-20250217023014

grafana Updated

mirantis.azurecr.io/stacklight/grafana:11.2.6

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20250214112745

kube-state-metrics Updated

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.13.0

kubectl Removed

n/a

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20250217023013

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20250217023013

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.8.2

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-14

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2.17-20250217023013

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2.17-20250217023013

pgbouncer Removed

n/a

prometheus Updated

mirantis.azurecr.io/stacklight/prometheus:v3.0.1

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20250217023019

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20250217023014

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20250217023013

prometheus-postgres-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.16.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20250207134705

psql-client Removed

n/a

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20250217023014

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20250214110717

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-3.3-p2-20250217081737

telegraf

n/a Removed

mirantis.azurecr.io/stacklight/telegraf:1-20250217113112 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20250217023013

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20250217023013

System and MCR artifacts

17.3.x series

Major and patch versions update path

The primary distinction between major and patch product versions lies in the fact that major release versions introduce new functionalities, whereas patch release versions predominantly offer minor product enhancements, mostly CVE resolutions for your clusters.

Depending on your deployment needs, you can either update only between major Cluster releases or apply patch updates between major releases. Choosing the latter option ensures that you receive security fixes as soon as they become available, though you must be prepared to update your cluster frequently, approximately once every three weeks. Alternatively, you can update only between major Cluster releases, as each subsequent major Cluster release includes the patch Cluster release updates of the previous major Cluster release.

This section outlines release notes for supported major and patch Cluster releases of the 17.3.x series dedicated for Mirantis OpenStack for Kubernetes (MOSK).

17.3.7

This section includes release notes for the patch Cluster release 17.3.7 that is introduced in the Container Cloud patch release 2.29.2 and is based on the previous Cluster releases of the 17.3.x series and on 16.3.7.

This patch Cluster release introduces MOSK 24.3.4 that is based on Mirantis Kubernetes Engine 3.7.20 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.15 with docker-ee-cli updated to 23.0.17.
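
As an illustrative spot check only, the following sketch runs the standard docker version command on a cluster node to confirm the docker-ee-cli and MCR engine versions after applying this patch Cluster release.

    # Illustrative spot check only: runs the standard "docker version" command on a
    # cluster node. After this patch Cluster release is applied, the client
    # (docker-ee-cli) is expected to report 23.0.17 and the engine (MCR) 23.0.15.
    import subprocess

    output = subprocess.run(
        ["docker", "version", "--format", "{{.Client.Version}} {{.Server.Version}}"],
        capture_output=True, text=True, check=True,
    )
    client_version, server_version = output.stdout.split()
    print("docker-ee-cli:", client_version)
    print("MCR engine:", server_version)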

This section lists the artifacts of components included in the Cluster release 17.3.7.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.41.32.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.41.32.tgz

Docker images

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20250307062615

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-747c4ca9-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-747c4ca9-amd64

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.29.2-3.tgz

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.4-13.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.29.2-0

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-27.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-7.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-7.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-7.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-7.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-7.cve

rook

mirantis.azurecr.io/ceph/rook:v1.13.5-29

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-7.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.41.32.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.41.32.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.41.32.tgz

openstack-cloud-controller-manager Deprecated

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.41.32.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.41.32.tgz

Docker images

cinder-csi-plugin Updated

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-28

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.41.32

csi-attacher Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-10

csi-node-driver-registrar Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-9

csi-provisioner Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-10

csi-resizer Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-10

csi-snapshotter Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-9

livenessprobe Deprecated

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-9

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-13

openstack-cloud-controller-manager Deprecated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-28

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.41.32

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-agent

mirantis.azurecr.io/core/bin/lcm-agent-1.41.32

lcm-ansible

mirantis.azurecr.io/lcm/bin/lcm-ansible/v0.26.0-115-g2cedbea/lcm-ansible.tar.gz

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.41.32.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.41.32.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/helm-controller:1.41.32

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.26.0-115-g2cedbea

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.26.0-115-g2cedbea

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-242.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-317.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-18.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-57.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-62.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.16.13.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20250414023017

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20250414023017

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20250411082209

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20250414023016

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20250305095821

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.49.1-20250414023016

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.18-20250414023016

grafana

mirantis.azurecr.io/stacklight/grafana:10.4.15

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20250411124723

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20250414023016

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20250414023016

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.8.2

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-15

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2.17-20250414023016

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2.17-20250414023017

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20250414023016

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20250414023016

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20250414023016

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20250304085150

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20250414023016

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20250411082229

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-3.3-p2-20250414023016

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20241202023010

mirantis.azurecr.io/stacklight/telegraf:1-20250411114904 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20250414023016

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20250414023016

System and MCR artifacts

16.x series (current)

Major and patch versions update path

The primary distinction between major and patch product versions lies in the fact that major release versions introduce new functionalities, whereas patch release versions predominantly offer minor product enhancements, mostly CVE resolutions for your clusters.

Depending on your deployment needs, you can either update only between major Cluster releases or apply patch updates between major releases. Choosing the latter option ensures that you receive security fixes as soon as they become available, though you must be prepared to update your cluster frequently, approximately once every three weeks. Alternatively, you can update only between major Cluster releases, as each subsequent major Cluster release includes the patch Cluster release updates of the previous major Cluster release.

This section outlines release notes for supported major and patch Cluster releases of the 16.x series.

16.4.x series

Major and patch versions update path

The primary distinction between major and patch product versions lies in the fact that major release versions introduce new functionalities, whereas patch release versions predominantly offer minor product enhancements, mostly CVE resolutions for your clusters.

Depending on your deployment needs, you can either update only between major Cluster releases or apply patch updates between major releases. Choosing the latter option ensures that you receive security fixes as soon as they become available, though you must be prepared to update your cluster frequently, approximately once every three weeks. Alternatively, you can update only between major Cluster releases, as each subsequent major Cluster release includes the patch Cluster release updates of the previous major Cluster release.

This section outlines release notes for supported major and patch Cluster releases of the 16.4.x series.

16.4.2

This section outlines release notes for the patch Cluster release 16.4.2 that is introduced in the Container Cloud release 2.29.2 and is based on 16.4.0 and 16.4.1.

The Cluster release 16.4.2 supports Mirantis Kubernetes Engine 3.7.20 with Kubernetes 1.27 and Mirantis Container Runtime 25.0.7 with docker-ee-cli updated to 25.0.9m1.

For details on patch release delivery, see Patch releases.

This section lists the artifacts of components included in the Cluster release 16.4.2.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.42.16.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.42.16.tgz

Docker images

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20250307062615

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-747c4ca9-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-747c4ca9-amd64

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.29.0-14.tgz

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v18.2.4-14.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.29.0-13

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.11.0-9.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.7.0-3.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v4.0.1-3.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.12.0-3.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.10.1-3.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v8.1.0-3.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.14.10-29

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v8.1.0-3.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.42.16.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.42.16.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.42.16.tgz

openstack-cloud-controller-manager Deprecated

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.42.16.tgz

pause-update-controller

https://binary.mirantis.com/core/helm/pause-update-controller-1.42.16.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.42.16.tgz

Docker images

cinder-csi-plugin Updated

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-28

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.42.16

csi-attacher Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-10

csi-node-driver-registrar Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-9

csi-provisioner Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-10

csi-resizer Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-10

csi-snapshotter Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-9

livenessprobe Deprecated

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-9

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-13

openstack-cloud-controller-manager Deprecated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-28

pause-update-controller Updated

mirantis.azurecr.io/core/pause-update-controller:1.42.16

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.42.16

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-agent

mirantis.azurecr.io/core/bin/lcm-agent-1.42.16

lcm-ansible

mirantis.azurecr.io/lcm/bin/lcm-ansible/v0.27.0-40-g23e6f06/lcm-ansible.tar.gz

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.42.16.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.42.16.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/helm-controller:1.42.16

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.27.0-40-g23e6f06

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.27.0-40-g23e6f06

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-246.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-354.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-18.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-57.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-62.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-259.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.17.7.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20250414023017

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20250414023017

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20250411082209

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20250414023016

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20250305095821

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.49.1-20250414023016

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.14.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.8.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.18-20250414023016

grafana

mirantis.azurecr.io/stacklight/grafana:11.2.6

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20250411124723

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.13.0

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20250414023016

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20250414023016

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.8.2

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-15

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2.17-20250414023016

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2.17-20250414023017

prometheus

mirantis.azurecr.io/stacklight/prometheus:v3.2.1

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20250414023016

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20250414023016

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20250414023016

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.16.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20250304085150

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20250414023016

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20250411082229

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-3.3-p2-20250414023016

telegraf Updated

mirantis.azurecr.io/stacklight/telegraf:1-20250411114904

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20250414023016

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20250414023016

System and MCR artifacts

16.4.0

This section outlines release notes for the major Cluster release 16.4.0 that is introduced in the Container Cloud release 2.29.0. The Cluster release 16.4.0 supports:

  • Mirantis Kubernetes Engine (MKE) 3.7.19. For details, see MKE Release Notes.

  • Mirantis Container Runtime (MCR) 25.0.8. For details, see MCR Release Notes.

  • Kubernetes 1.27.

For the list of known and addressed issues, refer to the Container Cloud release 2.29.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 16.4.0 that is introduced in the Container Cloud release 2.29.0.

Support for MKE 3.7.19 and MCR 25.0.8

Introduced support for Mirantis Container Runtime (MCR) 25.0.8 and Mirantis Kubernetes Engine (MKE) 3.7.19 that includes Kubernetes 1.27.16.

On existing clusters, MKE and MCR are updated to the latest supported version when you update your managed cluster to the Cluster release 16.4.0.

Improvements in the CIS Benchmark compliance for Ubuntu, MKE, and Docker

Added the following improvements in the CIS Benchmark compliance for Ubuntu, MKE, and Docker:

  • Introduced new password policies for local (Linux) user accounts. These policies match the rules described in CIS Benchmark compliance checks (executed by the Nessus scanner) for Ubuntu Linux 22.04 LTS v2.0.0 L1 Server, revision 1.1.

    The rules are applied automatically to all cluster nodes during cluster update. Therefore, if you use custom Linux accounts protected by passwords, pay attention to the following rules, as you may be forced to update a non-compliant password during login:

    • Password expiration interval: 365 days

    • Minimum password length: 14 symbols

    • Required symbols are capital letters, lower case letters, and digits

    • At least 2 characters of the new password must not be present in the old password

    • Maximum identical consecutive characters: 3 (allowed: aaa123, not allowed: aaaa123)

    • Maximum sequential characters: 3 (allowed: abc1xyz, not allowed: abcd123)

    • Dictionary check is enabled

    • You must not reuse an old password

    • After 3 failed password input attempts, the account is disabled for 15 minutes

  • Analyzed and reached 87% of pass rate in the CIS Benchmark compliance checks (executed by the Nessus scanner) for Ubuntu Linux 22.04 LTS v2.0.0 L1 Server, revision 1.1.

    Note

    Compliance results can vary between clusters due to configuration-dependent tests, such as server disk partitioning.

    If you require a detailed report of analyzed and fixed compliance checks, contact Mirantis support.

  • Analyzed and fixed the following checks (where possible, to reduce the number of failed components) in the Docker and MKE CIS benchmarks compliances:

    MKE

    Control ID

    Description

    5.1.3

    Minimize wildcard use in Roles and ClusterRoles: Over permissive access to resource types in Group

    5.2.8

    Minimize the admission of containers with added capabilities: Container with ANY capability

    5.7.3

    Apply Security Context to Your Pods and Containers: Policies - Defined Pods Security Context

    Docker

    Control ID

    Description

    5.26

    Ensure that container health is checked at runtime: No containers without health checks

    Note

    The control IDs may differ depending on the scanning tool.

    Note

    Some security scanners may produce false-negative results for some resources because native Docker containers and Kubernetes pods have different configuration mechanisms.

Components versions

The following table lists the components versions of the Cluster release 16.4.0. The components that are newly added, updated, deprecated, or removed as compared to 16.3.0, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration Updated

Mirantis Kubernetes Engine

3.7.19

Container runtime Updated

Mirantis Container Runtime

25.0.8

Core Updated

cinder-csi-plugin

1.27.2-26

client-certificate-controller

1.42.9

csi-attacher Deprecated

4.2.0-9

csi-node-driver-registrar Deprecated

2.7.0-9

csi-provisioner Deprecated

3.4.1-9

csi-resizer Deprecated

1.7.0-9

csi-snapshotter Deprecated

6.2.1-mcc-6

livenessprobe Deprecated

2.9.0-9

metrics-server

0.6.3-12

policy-controller

1.42.9

Distributed storage Updated

Ceph

18.2.4-12.cve (Reef)

Rook

1.14.10-26

LCM Updated

helm-controller

1.42.9

lcm-ansible

0.27.0-35-g95a1b94

lcm-agent

1.42.9

StackLight

Alerta Updated

9.0.4

Alertmanager

0.25.0

Alertmanager Webhook ServiceNow

0.1

Blackbox Exporter Updated

0.25.0

cAdvisor Updated

0.49.1

Elasticsearch Curator

5.7.6

Elasticsearch Exporter Updated

1.8.0

Fluentd Updated

1.18.0

Grafana Updated

11.2.6

kube-state-metrics Updated

2.13.0

Metric Collector

0.1

Metricbeat

7.12.1

Node Exporter

1.8.2

OAuth2 Proxy

7.1.3

OpenSearch Updated

2.17.1

OpenSearch Dashboards Updated

2.17.1

Prometheus Updated

3.0.1

Prometheus ES Exporter Updated

0.14.1

Prometheus MS Teams

1.5.2

Prometheus Patroni Exporter

0.0.1

Prometheus Postgres Exporter Updated

0.16.0

Prometheus Relay

0.4

sf-notifier

0.4

sf-reporter

0.1

Spilo Updated

13-3.3-p2

Telegraf

n/a Removed

1.33.2 Updated

Telemeter

4.4

Artifacts

This section lists the artifacts of components included in the Cluster release 16.4.0. The components that are newly added, updated, deprecated, or removed as compared to 16.3.0, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.42.9.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.42.9.tgz

Docker images

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240716113922

metallb-controller Updated

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-96d6c3a2-amd64

metallb-speaker Updated

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-96d6c3a2-amd64

Ceph artifacts

Artifact

Component

Path

Helm charts

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.29.0-7.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.4-12.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.29.0-6

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.11.0-7.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.7.0-2.release

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v4.0.1-2.release

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.12.0-2.release

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.10.1-2.release

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v8.1.0-2.release

rook

mirantis.azurecr.io/ceph/rook:v1.14.10-26

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v8.1.0-2.release

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.42.9.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.42.9.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.42.9.tgz

openstack-cloud-controller-manager Deprecated

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.42.9.tgz

pause-update-controller New

https://binary.mirantis.com/core/helm/pause-update-controller-1.42.9.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.42.9.tgz

Docker images Updated

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-26

client-certificate-controller

mirantis.azurecr.io/core/client-certificate-controller:1.42.9

csi-attacher Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-9

csi-node-driver-registrar Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-9

csi-provisioner Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-9

csi-resizer Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-9

csi-snapshotter Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-8

livenessprobe Deprecated

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-9

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-12

openstack-cloud-controller-manager Deprecated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-26

pause-update-controller New

mirantis.azurecr.io/core/pause-update-controller:1.42.9

policy-controller

mirantis.azurecr.io/core/policy-controller:1.42.9

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-agent

mirantis.azurecr.io/core/bin/lcm-agent-1.42.9

lcm-ansible

mirantis.azurecr.io/lcm/bin/lcm-ansible/v0.27.0-35-g95a1b94/lcm-ansible.tar.gz

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.42.9.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.42.9.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/helm-controller:1.42.9

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.27.0-35-g95a1b94

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.27.0-35-g95a1b94

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-246.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-354.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-18.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards Updated

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-57.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-62.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-259.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp Removed

n/a

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.17.2.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20250217023014

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20250217023014

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20250214153133

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20250217023019

blackbox-exporter Updated

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20250207134744

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.49.1-20250217023014

configmap-reload Updated

mirantis.azurecr.io/stacklight/configmap-reload:v0.14.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter Updated

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.8.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.18-20250217023014

grafana Updated

mirantis.azurecr.io/stacklight/grafana:11.2.6

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20250214112745

kube-state-metrics Updated

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.13.0

kubectl Removed

n/a

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20250217023013

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20250217023013

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.8.2

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-14

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2.17-20250217023013

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2.17-20250217023013

openstack-refapp Removed

n/a

pgbouncer Removed

n/a

prometheus Updated

mirantis.azurecr.io/stacklight/prometheus:v3.0.1

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20250217023019

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20250217023014

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20250217023013

prometheus-postgres-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.16.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20250207134705

psql-client Removed

n/a

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20250217023014

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20250214110717

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-3.3-p2-20250217081737

telegraf

n/a Removed

mirantis.azurecr.io/stacklight/telegraf:1-20250217113112 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20250217023013

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20250217023013

System and MCR artifacts

16.3.x series

Major and patch versions update path

The primary distinction between major and patch product versions lies in the fact that major release versions introduce new functionalities, whereas patch release versions predominantly offer minor product enhancements, mostly CVE resolutions for your clusters.

Depending on your deployment needs, you can either update only between major Cluster releases or apply patch updates between major releases. Choosing the latter option ensures that you receive security fixes as soon as they become available, though you must be prepared to update your cluster frequently, approximately once every three weeks. Alternatively, you can update only between major Cluster releases, as each subsequent major Cluster release includes the patch Cluster release updates of the previous major Cluster release.

This section outlines release notes for supported major and patch Cluster releases of the 16.3.x series.

16.3.7

This section includes release notes for the patch Cluster release 16.3.7 that is introduced in the Container Cloud patch release 2.29.2 and is based on the previous Cluster releases of the 16.3.x series.

This Cluster release supports Mirantis Kubernetes Engine 3.7.20 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.15 with docker-ee-cli updated to 23.0.17.

  • For the list of CVE fixes delivered with this patch Cluster release, see 2.29.2

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.3.7.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.41.32.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.41.32.tgz

Docker images

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20250307062615

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-747c4ca9-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-747c4ca9-amd64

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.29.2-3.tgz

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.4-13.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.29.2-0

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-27.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-7.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-7.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-7.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-7.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-7.cve

rook

mirantis.azurecr.io/ceph/rook:v1.13.5-29

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-7.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.41.32.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.41.32.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.41.32.tgz

openstack-cloud-controller-manager Deprecated

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.41.32.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.41.32.tgz

Docker images

cinder-csi-plugin Updated

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-28

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.41.32

csi-attacher Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-10

csi-node-driver-registrar Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-9

csi-provisioner Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-10

csi-resizer Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-10

csi-snapshotter Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-9

livenessprobe Deprecated

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-9

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-13

openstack-cloud-controller-manager Deprecated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-28

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.41.32

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-agent

mirantis.azurecr.io/core/bin/lcm-agent-1.41.32

lcm-ansible

mirantis.azurecr.io/lcm/bin/lcm-ansible/v0.26.0-115-g2cedbea/lcm-ansible.tar.gz

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.41.32.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.41.32.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/helm-controller:1.41.32

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.26.0-115-g2cedbea

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.26.0-115-g2cedbea

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-242.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-317.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-18.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-57.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-62.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.16.13.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20250414023017

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20250414023017

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20250411082209

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20250414023016

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20250305095821

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.49.1-20250414023016

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.18-20250414023016

grafana

mirantis.azurecr.io/stacklight/grafana:10.4.15

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20250411124723

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20250414023016

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20250414023016

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.8.2

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-15

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2.17-20250414023016

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2.17-20250414023017

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20250414023016

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20250414023016

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20250414023016

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20250304085150

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20250414023016

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20250411082229

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-3.3-p2-20250414023016

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20241202023010

mirantis.azurecr.io/stacklight/telegraf:1-20250411114904 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20250414023016

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20250414023016

System and MCR artifacts
Deprecated Cluster releases

This section describes the release notes for the deprecated major Cluster releases that will become unsupported in one of the following Container Cloud releases. Make sure to update your managed clusters to the latest supported version as described in Update a managed cluster.

17.3.6

This section includes release notes for the patch Cluster release 17.3.6 that is introduced in the Container Cloud patch release 2.29.1 and is based on the previous Cluster releases of the 17.3.x series and on 16.3.6.

This patch Cluster release introduces MOSK 24.3.3 that is based on Mirantis Kubernetes Engine 3.7.20 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.15 with docker-ee-cli updated to 23.0.17.

This section lists the artifacts of components included in the Cluster release 17.3.6.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.41.31.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.41.31.tgz

Docker images Updated

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20250307062615

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-747c4ca9-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-747c4ca9-amd64

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.29.1-1.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.4-13.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.29.1-0

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-27.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-7.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-7.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-7.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-7.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-7.cve

rook

mirantis.azurecr.io/ceph/rook:v1.13.5-29

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-7.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.41.31.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.41.31.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.41.31.tgz

openstack-cloud-controller-manager Deprecated

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.41.31.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.41.31.tgz

Docker images Updated

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-27

client-certificate-controller

mirantis.azurecr.io/core/client-certificate-controller:1.41.31

csi-attacher Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-10

csi-node-driver-registrar Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-9

csi-provisioner Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-10

csi-resizer Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-10

csi-snapshotter Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-9

livenessprobe Deprecated

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-9

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-13

openstack-cloud-controller-manager Deprecated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-27

policy-controller

mirantis.azurecr.io/core/policy-controller:1.41.31

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-agent

mirantis.azurecr.io/core/bin/lcm-agent-1.41.31

lcm-ansible

mirantis.azurecr.io/lcm/bin/lcm-ansible/v0.26.0-114-gf1e92be/lcm-ansible.tar.gz

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.41.31.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.41.31.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/helm-controller:1.41.31

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.26.0-114-gf1e92be

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.26.0-114-gf1e92be

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-242.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-317.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-18.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-57.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-62.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.16.11.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20250317104943

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20250317023015

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20250317092402

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20250317023016

blackbox-exporter Updated

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20250305095821

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.49.1-20250317023016

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.18-20250317023010

grafana Updated

mirantis.azurecr.io/stacklight/grafana:10.4.15

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20250214112745

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20250317023016

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20250317023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.8.2

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-15

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2.17-20250317023016

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2.17-20250317023010

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20250317023015

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20250317023015

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20250317023016

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20250304085150

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20250317092322

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20250214110717

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-3.3-p2-20250317023016

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20241202023010

mirantis.azurecr.io/stacklight/telegraf:1-20241115074302

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20250317023016

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20250317023016

System and MCR artifacts
17.3.5

This section includes release notes for the patch Cluster release 17.3.5 that is introduced in the Container Cloud patch release 2.28.5 and is based on the Cluster releases 17.3.0, 17.3.4, and 16.3.4.

This patch Cluster release introduces MOSK 24.3.2 that is based on Mirantis Kubernetes Engine 3.7.18 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.15.

This section lists the artifacts of components included in the Cluster release 17.3.5.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.41.28.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.41.28.tgz

Docker images

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240716113922

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-a68c7101-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-a68c7101-amd64

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.28.4-3.tgz

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.4-11.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.28.4-2

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-26.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-6.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-6.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-6.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-6.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-6.cve

rook

mirantis.azurecr.io/ceph/rook:v1.13.5-28

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-6.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.41.28.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.41.28.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.41.28.tgz

openstack-cloud-controller-manager Deprecated

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.41.28.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.41.28.tgz

Docker images

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-24

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.41.28

csi-attacher Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-7

csi-node-driver-registrar Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-7

csi-provisioner Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-7

csi-resizer Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-7

csi-snapshotter Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-6

livenessprobe Deprecated

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-7

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-10

openstack-cloud-controller-manager Deprecated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-24

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.41.28

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-agent

mirantis.azurecr.io/core/bin/lcm-agent-1.41.28

lcm-ansible

mirantis.azurecr.io/lcm/bin/lcm-ansible/v0.26.0-112-g26f96e1/lcm-ansible.tar.gz

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.41.28.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.41.28.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/helm-controller:1.41.28

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.26.0-112-g26f96e1

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.26.0-112-g26f96e1

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-242.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-317.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-18.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-57.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-62.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.16.9.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20250113023008

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20250113023013

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20241022074315

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20250113023014

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20241217061716

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.49.1-20250113023012

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.18-20250114114547

grafana

mirantis.azurecr.io/stacklight/grafana:10.4.3

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20241115071117

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20250113023012

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20250113023013

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.8.2

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-13

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2.17-20250113023014

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2.17-20250113023013

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20250113023014

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20250113023013

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20250113023014

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240925023019

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20250113023013

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20241021111607

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-3.3-p2-20250113023013

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20241202023010

mirantis.azurecr.io/stacklight/telegraf:1-20241115074302

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20250113023014

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20250113023012

System and MCR artifacts
17.3.4

This section includes release notes for the patch Cluster release 17.3.4 that is introduced in the Container Cloud patch release 2.28.4 and is based on the Cluster releases 17.3.0 and 16.3.4.

This patch Cluster release introduces MOSK 24.3.1 that is based on Mirantis Kubernetes Engine 3.7.17 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.15, which includes containerd 1.6.36.

This section lists the artifacts of components included in the Cluster release 17.3.4.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.41.26.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.41.26.tgz

Docker images

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240716113922

metallb-controller Updated

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-a68c7101-amd64

metallb-speaker Updated

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-a68c7101-amd64

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.28.4-3.tgz

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v18.2.4-11.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.28.4-2

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-26.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-6.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-6.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-6.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-6.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-6.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.13.5-28

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-6.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.41.26.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.41.26.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.41.26.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.41.26.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.41.26.tgz

Docker images

cinder-csi-plugin Updated

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-24

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.41.26

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-7

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-7

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-7

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-7

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-6

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-7

metrics-server Updated

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-10

openstack-cloud-controller-manager Updated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-24

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.41.26

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.26.0-111-g8632985/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.41.26

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.41.26.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.41.26.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.41.26

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.26.0-111-g8632985

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.26.0-111-g8632985

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-242.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-317.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-18.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards Updated

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-57.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-62.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.16.8.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20241216023012

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20241216023016

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20241022074315

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20241216023016

blackbox-exporter Updated

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20241217061716

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.49.1-20241216023012

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.17-20241216023012

grafana Updated

mirantis.azurecr.io/stacklight/grafana:10.4.3

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20241115071117

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Removed

n/a

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20241216023016

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20241217093320

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.8.2

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-13

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2.17-20241216023012

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2.17-20241216023012

pgbouncer Removed

n/a

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20241216023016

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20241209023016

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20241216023016

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240925023019

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20241216023012

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20241021111607

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-3.3-p2-20241216023012

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20241202023010 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20241115074302 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20241216023016

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20241216023012

System and MCR artifacts
17.3.0

This section outlines release notes for the major Cluster release 17.3.0 that is introduced in the Container Cloud release 2.28.0. This Cluster release is based on the Cluster release 16.3.0 and supports Mirantis Kubernetes Engine 3.7.12 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.14.

For the list of known and addressed issues, refer to the Container Cloud release 2.28.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 17.3.0 that is introduced in the Container Cloud release 2.28.0.

Support for MKE 3.7.12 and MCR 23.0.14

Introduced support for Mirantis Container Runtime (MCR) 23.0.14 and Mirantis Kubernetes Engine (MKE) 3.7.12 that includes Kubernetes 1.27.14.

On existing clusters, MKE and MCR are updated to the latest supported version when you update your managed cluster to the Cluster release 17.3.0.

Note

The 3.7.12 update applies to users who follow the update train using major releases. Users who install patch releases have already obtained MKE 3.7.12 in Container Cloud 2.27.3 (Cluster release 17.1.4).

Improvements in the CIS Benchmark compliance for Ubuntu Linux

Analyzed and reached an 80% pass rate in the CIS Benchmark compliance checks (executed by the Nessus scanner) for Ubuntu Linux 22.04 LTS v2.0.0 L1 Server, revision 1.1.

Note

Compliance results can vary between clusters due to configuration-dependent tests, such as server disk partitioning.

If you require a detailed report of analyzed and fixed compliance checks, contact Mirantis support.

Monitoring of LCM issues

Implemented proactive monitoring that allows the operator to quickly detect and resolve LCM health issues in a cluster. The implementation includes the dedicated MCCClusterLCMUnhealthy alert along with the kaas_cluster_lcm_healthy and kaas_cluster_ready metrics that are collected on the kaas-exporter side.
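
The following is a minimal sketch, in Python, of how an operator might inspect these metrics directly through the Prometheus HTTP API instead of waiting for the MCCClusterLCMUnhealthy alert to fire. The metric names come from this release note; the Prometheus endpoint URL and the assumption that a value of 1 means "healthy" are illustrative only and must be verified against your StackLight deployment.

import requests

PROMETHEUS_URL = "http://prometheus-server.stacklight:9090"  # assumed in-cluster address

def cluster_lcm_healthy():
    """Return kaas_cluster_lcm_healthy samples keyed by their label sets."""
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": "kaas_cluster_lcm_healthy"},
        timeout=10,
    )
    resp.raise_for_status()
    samples = resp.json()["data"]["result"]
    # Assumption: a value of 1 means the cluster LCM state is healthy,
    # while 0 corresponds to the condition that fires MCCClusterLCMUnhealthy.
    return {tuple(sorted(s["metric"].items())): float(s["value"][1]) for s in samples}

if __name__ == "__main__":
    for labels, value in cluster_lcm_healthy().items():
        state = "healthy" if value == 1 else "unhealthy"
        print(dict(labels), state)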

Refactoring of StackLight expiration alerts

Refactored all certificate and license expiration alerts in StackLight so that they now display the exact number of remaining days before expiration using {{ $value | humanizeTimestamp }}. This optimization replaces vague wording such as less than 10 days, which indicated a range from 0 to 9 days before expiration.
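
The arithmetic behind the reworded alerts is simple: given an expiration time exposed as a Unix timestamp, the precise number of remaining days replaces the former range-style wording. The short Python sketch below illustrates only this calculation; it is not the alert template itself, and the sample timestamp is hypothetical.

import time

def days_until_expiration(expiry_unix_ts, now=None):
    """Return the exact number of days left before the given expiration timestamp."""
    now = time.time() if now is None else now
    return (expiry_unix_ts - now) / 86400  # 86400 seconds per day

# An expiration 9.5 days away is reported as "9.5 days" rather than
# the former range-style wording "less than 10 days".
print(f"{days_until_expiration(time.time() + 9.5 * 86400):.1f} days remaining")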

Components versions

The following table lists the components versions of the Cluster release 17.3.0. The components that are newly added, updated, deprecated, or removed as compared to 17.2.0 are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration Updated

Mirantis Kubernetes Engine

3.7.12 0

Container runtime Updated

Mirantis Container Runtime

23.0.14 1

Core Updated

cinder-csi-plugin

1.27.2-19

client-certificate-controller

1.41.14

csi-attacher

4.2.0-7

csi-node-driver-registrar

2.7.0-7

csi-provisioner

3.4.1-7

csi-resizer

1.7.0-7

csi-snapshotter

6.2.1-mcc-6

livenessprobe

2.9.0-7

metrics-server

0.6.3-9

policy-controller

1.41.14

Distributed storage Updated

Ceph

18.2.4-6.cve (Reef)

Rook

1.13.5-21

LCM Updated

helm-controller

1.41.14

lcm-ansible

0.26.0-95-g95f0130

lcm-agent

1.41.14

StackLight

Alerta

9.0.1

Alertmanager

0.25.0

Alertmanager Webhook ServiceNow

0.1

Blackbox Exporter

0.24.0

cAdvisor

0.47.2

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.5.0

Fluentd

1.15.3

Grafana Updated

10.4.3

kube-state-metrics

2.10.1

Metric Collector

0.1

Metricbeat

7.12.1

Node Exporter Updated

1.8.2

OAuth2 Proxy

7.1.3

OpenSearch

2.12.0

OpenSearch Dashboards

2.12.0

Prometheus

2.48.0

Prometheus ES Exporter

0.14.0

Prometheus MS Teams

1.5.2

Prometheus Patroni Exporter

0.0.1

Prometheus Postgres Exporter

0.15.0

Prometheus Relay

0.4

sf-notifier

0.4

sf-reporter

0.1

Spilo

13-2.1p9

Telegraf

1.9.1

1.30.2

Telemeter

4.4

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the artifacts of components included in the Cluster release 17.3.0. The components that are newly added, updated, deprecated, or removed as compared to 17.2.0 are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.28.0-15.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.4-6.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.28.0-14

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-20.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-6.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-6.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-6.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-6.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-6.cve

rook

mirantis.azurecr.io/ceph/rook:v1.13.5-21

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-6.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.41.14.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.41.14.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.41.14.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.41.14.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.41.14.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.41.14.tgz

Docker images Updated

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-20

client-certificate-controller

mirantis.azurecr.io/core/client-certificate-controller:1.41.14

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-7

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-7

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-7

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-7

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-6

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-7

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-9

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-20

policy-controller

mirantis.azurecr.io/core/policy-controller:1.41.14

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.26.0-95-g95f0130/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.41.14

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.41.14.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.41.14.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.41.14

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-240.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-305.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch Updated

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards Updated

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-56.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.16.2.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240828023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240828023017

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240828023020

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240828023011

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240828023010

grafana Updated

mirantis.azurecr.io/stacklight/grafana:10.4.3

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240828023017

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.27-20240828023016

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240828023014

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240626023010

node-exporter Updated

mirantis.azurecr.io/stacklight/node-exporter:v1.8.2

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-10

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240828023015

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240828023010

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240828023016

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240828023017

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240828023018

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240828023016

psql-client Updated

mirantis.azurecr.io/scale/psql-client:v13-20240701095027

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240828023015

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240828023010

stacklight-toolkit Removed

n/a

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240828023009 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240426131156

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240828023013

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240828023017

System and MCR artifacts
16.4.1

This section outlines release notes for the patch Cluster release 16.4.1 that is introduced in the Container Cloud patch release 2.29.1 and is based on the Cluster release 16.4.0.

The Cluster release 16.4.1 supports Mirantis Kubernetes Engine 3.7.20 with Kubernetes 1.27 and Mirantis Container Runtime 25.0.7 with docker-ee-cli updated to 25.0.9m1.

For details on patch release delivery, see Patch releases.

This section lists the artifacts of components included in the Cluster release 16.4.1.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.42.13.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.42.13.tgz

Docker images Updated

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20250307062615

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-747c4ca9-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-747c4ca9-amd64

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.29.0-12.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.4-13.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.29.0-11

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.11.0-8.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.7.0-3.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v4.0.1-3.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.12.0-3.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.10.1-3.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v8.1.0-3.cve

rook

mirantis.azurecr.io/ceph/rook:v1.14.10-28

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v8.1.0-3.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.42.13.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.42.13.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.42.13.tgz

openstack-cloud-controller-manager Deprecated

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.42.13.tgz

pause-update-controller

https://binary.mirantis.com/core/helm/pause-update-controller-1.42.13.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.42.13.tgz

Docker images Updated

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-27

client-certificate-controller

mirantis.azurecr.io/core/client-certificate-controller:1.42.13

csi-attacher Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-10

csi-node-driver-registrar Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-9

csi-provisioner Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-10

csi-resizer Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-10

csi-snapshotter Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-9

livenessprobe Deprecated

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-9

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-13

openstack-cloud-controller-manager Deprecated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-27

pause-update-controller

mirantis.azurecr.io/core/pause-update-controller:1.42.13

policy-controller

mirantis.azurecr.io/core/policy-controller:1.42.13

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-agent

mirantis.azurecr.io/core/bin/lcm-agent-1.42.13

lcm-ansible

mirantis.azurecr.io/lcm/bin/lcm-ansible/v0.27.0-39-g9b2f17a/lcm-ansible.tar.gz

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.42.13.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.42.13.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/helm-controller:1.42.13

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.27.0-39-g9b2f17a

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.27.0-39-g9b2f17a

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-246.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-354.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-18.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-57.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-62.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-259.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.17.4.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20250317104943

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20250317023015

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20250317092402

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20250317023016

blackbox-exporter Updated

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20250305095821

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.49.1-20250317023016

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.14.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.8.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.18-20250317023010

grafana

mirantis.azurecr.io/stacklight/grafana:11.2.6

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20250214112745

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.13.0

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20250317023016

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20250317023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.8.2

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-15

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2.17-20250317023016

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2.17-20250317023010

prometheus Updated

mirantis.azurecr.io/stacklight/prometheus:v3.2.1

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20250317023015

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20250317023015

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20250317023016

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.16.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20250304085150

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20250317092322

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20250214110717

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-3.3-p2-20250317023016

telegraf

mirantis.azurecr.io/stacklight/telegraf:1-20250217113112

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20250317023016

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20250317023016

System and MCR artifacts
16.3.6

This section includes release notes for the patch Cluster release 16.3.6 that is introduced in the Container Cloud patch release 2.29.1 and is based on the previous Cluster releases of the 16.3.x series.

This Cluster release supports Mirantis Kubernetes Engine 3.7.20 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.15 with docker-ee-cli updated to 23.0.17.

  • For the list of CVE fixes delivered with this patch Cluster release, see 2.29.1

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.3.6.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.41.31.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.41.31.tgz

Docker images Updated

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20250307062615

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-747c4ca9-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-747c4ca9-amd64

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.29.1-1.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.4-13.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.29.1-0

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-27.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-7.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-7.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-7.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-7.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-7.cve

rook

mirantis.azurecr.io/ceph/rook:v1.13.5-29

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-7.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.41.31.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.41.31.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.41.31.tgz

openstack-cloud-controller-manager Deprecated

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.41.31.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.41.31.tgz

Docker images Updated

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-27

client-certificate-controller

mirantis.azurecr.io/core/client-certificate-controller:1.41.31

csi-attacher Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-10

csi-node-driver-registrar Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-9

csi-provisioner Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-10

csi-resizer Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-10

csi-snapshotter Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-9

livenessprobe Deprecated

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-9

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-13

openstack-cloud-controller-manager Deprecated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-27

policy-controller

mirantis.azurecr.io/core/policy-controller:1.41.31

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-agent

mirantis.azurecr.io/core/bin/lcm-agent-1.41.31

lcm-ansible

mirantis.azurecr.io/lcm/bin/lcm-ansible/v0.26.0-114-gf1e92be/lcm-ansible.tar.gz

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.41.31.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.41.31.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/helm-controller:1.41.31

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.26.0-114-gf1e92be

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.26.0-114-gf1e92be

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-242.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-317.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-18.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-57.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-62.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.16.11.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20250317104943

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20250317023015

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20250317092402

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20250317023016

blackbox-exporter Updated

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20250305095821

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.49.1-20250317023016

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.18-20250317023010

grafana Updated

mirantis.azurecr.io/stacklight/grafana:10.4.15

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20250214112745

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20250317023016

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20250317023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.8.2

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-15

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2.17-20250317023016

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2.17-20250317023010

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20250317023015

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20250317023015

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20250317023016

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20250304085150

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20250317092322

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20250214110717

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-3.3-p2-20250317023016

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20241202023010

mirantis.azurecr.io/stacklight/telegraf:1-20241115074302

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20250317023016

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20250317023016

System and MCR artifacts

16.3.5

This section includes release notes for the patch Cluster release 16.3.5 that is introduced in the Container Cloud patch release 2.28.5 and is based on the previous Cluster releases of the 16.3.x series.

This Cluster release supports Mirantis Kubernetes Engine 3.7.18 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.15.

  • For the list of CVE fixes delivered with this patch Cluster release, see 2.28.5

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.3.5.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.41.28.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.41.28.tgz

Docker images

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240716113922

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-a68c7101-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-a68c7101-amd64

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.28.4-3.tgz

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.4-11.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.28.4-2

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-26.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-6.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-6.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-6.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-6.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-6.cve

rook

mirantis.azurecr.io/ceph/rook:v1.13.5-28

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-6.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.41.28.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.41.28.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.41.28.tgz

openstack-cloud-controller-manager Deprecated

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.41.28.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.41.28.tgz

Docker images

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-24

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.41.28

csi-attacher Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-7

csi-node-driver-registrar Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-7

csi-provisioner Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-7

csi-resizer Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-7

csi-snapshotter Deprecated

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-6

livenessprobe Deprecated

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-7

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-10

openstack-cloud-controller-manager Deprecated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-24

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.41.28

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-agent

mirantis.azurecr.io/core/bin/lcm-agent-1.41.28

lcm-ansible

mirantis.azurecr.io/lcm/bin/lcm-ansible/v0.26.0-112-g26f96e1/lcm-ansible.tar.gz

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.41.28.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.41.28.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/helm-controller:1.41.28

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.26.0-112-g26f96e1

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.26.0-112-g26f96e1

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-242.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-317.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-18.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-57.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-62.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.16.9.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20250113023008

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20250113023013

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20241022074315

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20250113023014

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20241217061716

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.49.1-20250113023012

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.18-20250114114547

grafana

mirantis.azurecr.io/stacklight/grafana:10.4.3

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20241115071117

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20250113023012

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20250113023013

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.8.2

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-13

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2.17-20250113023014

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2.17-20250113023013

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20250113023014

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20250113023013

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20250113023014

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240925023019

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20250113023013

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20241021111607

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-3.3-p2-20250113023013

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20241202023010

mirantis.azurecr.io/stacklight/telegraf:1-20241115074302

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20250113023014

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20250113023012

System and MCR artifacts

16.3.4

This section includes release notes for the patch Cluster release 16.3.4 that is introduced in the Container Cloud patch release 2.28.4 and is based on the previous Cluster releases of the 16.3.x series.

This Cluster release supports Mirantis Kubernetes Engine 3.7.17 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.15, which includes containerd 1.6.36.

  • For the list of CVE fixes delivered with this patch Cluster release, see 2.28.4

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.3.4.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.41.26.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.41.26.tgz

Docker images

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240716113922

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-a68c7101-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-a68c7101-amd64

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.28.4-3.tgz

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v18.2.4-11.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.28.4-2

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-26.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-6.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-6.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-6.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-6.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-6.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.13.5-28

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-6.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.41.26.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.41.26.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.41.26.tgz

openstack-cloud-controller-manager Deprecated

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.41.26.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.41.26.tgz

Docker images

cinder-csi-plugin Updated

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-24

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.41.26

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-7

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-7

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-7

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-7

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-6

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-7

metrics-server Updated

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-10

openstack-cloud-controller-manager Deprecated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-24

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.41.26

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.26.0-111-g8632985/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.41.26

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.41.26.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.41.26.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.41.26

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.26.0-111-g8632985

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.26.0-111-g8632985

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-242.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-317.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-18.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-57.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-62.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.16.8.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20241216023012

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20241216023016

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20241022074315

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20241216023016

blackbox-exporter Updated

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20241217061716

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.49.1-20241216023012

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.17-20241216023012

grafana

mirantis.azurecr.io/stacklight/grafana:10.4.3

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20241115071117

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Removed

n/a

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20241216023016

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20241217093320

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.8.2

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-13

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2.17-20241216023012

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2.17-20241216023012

pgbouncer Removed

n/a

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20241216023016

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20241209023016

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20241216023016

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240925023019

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20241216023012

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20241021111607

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-3.3-p2-20241216023012

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20241202023010 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20241115074302

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20241216023016

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20241216023012

System and MCR artifacts

16.3.0

This section outlines release notes for the major Cluster release 16.3.0 that is introduced in the Container Cloud release 2.28.0. The Cluster release 16.3.0 supports:

  • Mirantis Kubernetes Engine (MKE) 3.7.12. For details, see MKE Release Notes.

  • Mirantis Container Runtime (MCR) 23.0.14. For details, see MCR Release Notes.

  • Kubernetes 1.27.

For the list of known and addressed issues, refer to the Container Cloud release 2.28.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 16.3.0 that is introduced in the Container Cloud release 2.28.0.

Support for MKE 3.7.12 and MCR 23.0.14

Introduced support for Mirantis Container Runtime (MCR) 23.0.14 and Mirantis Kubernetes Engine (MKE) 3.7.12 that includes Kubernetes 1.27.14 for the Container Cloud management and managed clusters.

On existing managed clusters, MKE and MCR are updated to the latest supported version when you update your managed cluster to the Cluster release 16.3.0.

Note

The 3.7.12 update applies to users who follow the update train using major releases. Users who install patch releases have already obtained MKE 3.7.12 in Container Cloud 2.27.3 (Cluster release 16.1.4).

Improvements in the CIS Benchmark compliance for Ubuntu Linux

Analyzed and reached an 80% pass rate in the CIS Benchmark compliance checks (executed by the Nessus scanner) for Ubuntu Linux 22.04 LTS v2.0.0 L1 Server, revision 1.1.

Note

Compliance results can vary between clusters due to configuration-dependent tests, such as server disk partitioning.

If you require a detailed report of analyzed and fixed compliance checks, contact Mirantis support.

Monitoring of LCM issues

Implemented proactive monitoring that allows the operator to quickly detect and resolve LCM health issues in a cluster. The implementation includes the dedicated MCCClusterLCMUnhealthy alert along with the kaas_cluster_lcm_healthy and kaas_cluster_ready metrics that are collected on the kaas-exporter side.
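
The following is a minimal, hypothetical sketch of a Prometheus alerting rule built on the kaas_cluster_lcm_healthy metric. It only illustrates the general idea; the actual MCCClusterLCMUnhealthy rule shipped with StackLight may use a different expression, duration, severity, and annotations, and both the cluster_name label and the 1/0 healthy/unhealthy value convention are assumptions made for this example:

  groups:
    - name: lcm-health.rules
      rules:
        - alert: MCCClusterLCMUnhealthy
          # Assumption for illustration: kaas-exporter exposes 1 for a healthy
          # cluster LCM state and 0 for an unhealthy one.
          expr: kaas_cluster_lcm_healthy == 0
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Cluster LCM is unhealthy"
            description: "LCM reports an unhealthy state for cluster {{ $labels.cluster_name }}."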

Refactoring of StackLight expiration alerts

Refactored all certificate and license expiration alerts in StackLight that now display the exact number of remaining days before expiration using {{ $value | humanizeTimestamp }}. This optimization replaces vague wording such as less than 10 days, which indicated a range from 0 to 9 days before expiration.
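
As a purely illustrative sketch, the following rule shows how the humanizeTimestamp template function renders an alert value. It assumes a metric whose value is the expiry time expressed as a Unix timestamp in seconds; the metric name, threshold, and wording are assumptions and do not reproduce the exact expiration rules shipped with StackLight:

  groups:
    - name: expiration.rules
      rules:
        - alert: CertificateExpirationSoon
          # Assumption for illustration: the metric value holds the certificate
          # expiry time as a Unix timestamp in seconds.
          expr: ssl_certificate_expiry_timestamp_seconds < time() + 10 * 86400
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: "A certificate expires within 10 days"
            description: "The certificate expires on {{ $value | humanizeTimestamp }}."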

Components versions

The following table lists the components versions of the Cluster release 16.3.0. The components that are newly added, updated, deprecated, or removed as compared to 16.2.0, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration Updated

Mirantis Kubernetes Engine

3.7.12 0

Container runtime Updated

Mirantis Container Runtime

23.0.14 1

Core Updated

cinder-csi-plugin

1.27.2-19

client-certificate-controller

1.41.14

csi-attacher

4.2.0-7

csi-node-driver-registrar

2.7.0-7

csi-provisioner

3.4.1-7

csi-resizer

1.7.0-7

csi-snapshotter

6.2.1-mcc-6

livenessprobe

2.9.0-7

metrics-server

0.6.3-9

policy-controller

1.41.14

vsphere-cloud-controller-manager Removed

n/a

vsphere-csi-driver Removed

n/a

vsphere-csi-syncer Removed

n/a

Distributed storage Updated

Ceph

18.2.4-6.cve (Reef)

Rook

1.13.5-21

LCM Updated

helm-controller

1.41.14

lcm-ansible

0.26.0-95-g95f0130

lcm-agent

1.41.14

StackLight

Alerta

9.0.1

Alertmanager

0.25.0

Alertmanager Webhook ServiceNow

0.1

Blackbox Exporter

0.24.0

cAdvisor

0.47.2

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.5.0

Fluentd

1.15.3

Grafana Updated

10.4.3

kube-state-metrics

2.10.1

Metric Collector

0.1

Metricbeat

7.12.1

Node Exporter Updated

1.8.2

OAuth2 Proxy

7.1.3

OpenSearch

2.12.0

OpenSearch Dashboards

2.12.0

Prometheus

2.48.0

Prometheus ES Exporter

0.14.0

Prometheus MS Teams

1.5.2

Prometheus Patroni Exporter

0.0.1

Prometheus Postgres Exporter

0.15.0

Prometheus Relay

0.4

sf-notifier

0.4

sf-reporter

0.1

Spilo

13-2.1p9

Telegraf

1.9.1

1.30.2

Telemeter

4.4

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the artifacts of components included in the Cluster release 16.3.0. The components that are newly added, updated, deprecated, or removed as compared to 16.2.0, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.28.0-15.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.4-6.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.28.0-14

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-20.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-6.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-6.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-6.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-6.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-6.cve

rook

mirantis.azurecr.io/ceph/rook:v1.13.5-21

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-6.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.41.14.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.41.14.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.41.14.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.41.14.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.41.14.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.41.14.tgz

vsphere-cloud-controller-manager Removed

n/a

vsphere-csi-plugin Removed

n/a

Docker images Updated

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-20

client-certificate-controller

mirantis.azurecr.io/core/client-certificate-controller:1.41.14

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-7

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-7

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-7

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-7

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-6

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-7

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-9

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-20

policy-controller

mirantis.azurecr.io/core/policy-controller:1.41.14

vsphere-cloud-controller-manager Removed

n/a

vsphere-csi-driver Removed

n/a

vsphere-csi-syncer Removed

n/a

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.26.0-95-g95f0130/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.41.14

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.41.14.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.41.14.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.41.14

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-240.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-305.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch Updated

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards Updated

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-56.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-16.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.16.2.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240828023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240828023017

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240828023020

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240828023011

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240828023010

grafana Updated

mirantis.azurecr.io/stacklight/grafana:10.4.3

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240828023017

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.27-20240828023016

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240828023014

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240626023010

node-exporter Updated

mirantis.azurecr.io/stacklight/node-exporter:v1.8.2

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-10

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240828023015

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240828023010

openstack-refapp Updated

mirantis.azurecr.io/openstack/openstack-refapp:0.1.8

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240828023016

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240828023017

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240828023018

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240828023016

psql-client Updated

mirantis.azurecr.io/scale/psql-client:v13-20240701095027

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240828023015

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240828023010

stacklight-toolkit Removed

n/a

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240828023009 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240426131156

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240828023013

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240828023017

System and MCR artifacts

Unsupported Cluster releases

This section describes the release notes for the unsupported Cluster releases. For details about supported Cluster releases, see Cluster releases (managed).

17.2.x series

Major and patch versions update path

The primary distinction between major and patch product versions is that major release versions introduce new functionality, whereas patch release versions predominantly offer minor product enhancements, mostly CVE resolutions for your clusters.

Depending on your deployment needs, you can either update only between major Cluster releases or apply patch updates between major releases. Choosing the latter option ensures that you receive security fixes as soon as they become available, though you should be prepared to update your cluster frequently, approximately once every three weeks. Otherwise, you can update only between major Cluster releases, as each subsequent major Cluster release includes the patch Cluster release updates of the previous major Cluster release.

This section outlines release notes for major and patch Cluster releases of the 17.2.x series dedicated to Mirantis OpenStack for Kubernetes (MOSK).

17.2.7

This section includes release notes for the patch Cluster release 17.2.7 that is introduced in the Container Cloud patch release 2.28.3 and is based on the previous Cluster releases of the 17.2.x series.

This patch Cluster release introduces MOSK 24.2.5 that is based on Mirantis Kubernetes Engine 3.7.16 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.11.

This section lists the artifacts of components included in the Cluster release 17.2.7.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.40.29.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.40.29.tgz

Docker images

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240716113922

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-a68c7101-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-a68c7101-amd64

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.28.3-2.tgz

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v18.2.4-10.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.28.3-0

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-24.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-6.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-6.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-6.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-6.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-6.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.13.5-26

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-6.cve

Core artifacts

Artifact

Component

Path

Helm charts

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.40.29.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.40.29.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.40.29.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.40.29.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.40.29.tgz

Docker images

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-22

client-certificate-controller

mirantis.azurecr.io/core/client-certificate-controller:1.40.29

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-7

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-7

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-7

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-7

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-6

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-7

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-9

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-22

policy-controller

mirantis.azurecr.io/core/policy-controller:1.40.29

LCM artifacts

Artifact

Component

Path

Binaries

lcm-agent

mirantis.azurecr.io/core/bin/lcm-agent-1.40.29

lcm-ansible Updated

mirantis.azurecr.io/lcm/bin/lcm-ansible/v0.25.0-45-g957de77/lcm-ansible.tar.gz

Helm charts

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.40.29.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.40.29.tgz

Docker images

helm-controller

mirantis.azurecr.io/core/helm-controller:1.40.29

mcc-haproxy Updated

mirantis.azurecr.io/lcm/mcc-haproxy:v0.25.0-45-g957de77

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.25.0-45-g957de77

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-238.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-317.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards Updated

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-57.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-61.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.15.14.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20241118023015

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20241118023015

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20241022074315

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20241119091011

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20241118023015

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20241118023015

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20241115071117

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.27-20241118023017

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20241118023015

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20241118023015

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-13

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2.17-20241118023016

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2.17-20241118023015

pgbouncer

mirantis.azurecr.io/stacklight/pgbouncer:1-20240925023021

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20241118023016

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20241118023015

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20241118023015

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240925023019

psql-client Removed

n/a

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20241118023015

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20241021111607

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20241118023015

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20241118023015 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20241115074302 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20241118023015

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20241118023015

System and MCR artifacts

17.2.6

This section includes release notes for the patch Cluster release 17.2.6 that is introduced in the Container Cloud patch release 2.28.2 and is based on the previous Cluster releases of the 17.2.x series.

This patch Cluster release introduces MOSK 24.2.4 that is based on Mirantis Kubernetes Engine 3.7.16 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.11.

This section lists the artifacts of components included in the Cluster release 17.2.6.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.40.29.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.40.29.tgz

Docker images

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240716113922

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-a68c7101-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-a68c7101-amd64

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.28.2-1.tgz

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v18.2.4-8.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.28.2-0

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-22.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-6.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-6.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-6.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-6.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-6.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.13.5-23

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-6.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.40.29.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.40.29.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.40.29.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.40.29.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.40.29.tgz

Docker images

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-22

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.40.29

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-7

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-7

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-7

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-7

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-6

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-7

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-9

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-22

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.40.29

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.25.0-44-g561405b/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.40.29

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.40.29.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.40.29.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.40.29

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.25.0-44-g561405b

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.25.0-44-g561405b

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-238.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-309.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-61.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.15.13.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20241028023014

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20241028023014

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20241022074315

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20241028023016

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20241028023014

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20241028023014

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20241021111512

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.27-20240925023021

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20241028023016

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20241028023014

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-13

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20241028023015

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20241028023015

pgbouncer

mirantis.azurecr.io/stacklight/pgbouncer:1-20240925023021

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20241028023016

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20241028023015

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20241028023016

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240925023019

psql-client Deprecated

mirantis.azurecr.io/scale/psql-client:v13-20241029083652

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20241028023015

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20241021111607

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20241028023014

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20241028023014 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20241018175310

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20241028023014

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20241028023014

System and MCR artifacts
17.2.5

This section includes release notes for the patch Cluster release 17.2.5 that is introduced in the Container Cloud patch release 2.28.1 and is based on the previous Cluster releases of the 17.2.x series.

This patch Cluster release introduces MOSK 24.2.3 that is based on Mirantis Kubernetes Engine 3.7.15 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.11.

This section lists the artifacts of components included in the Cluster release 17.2.5.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.40.28.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.40.28.tgz

Docker images

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240716113922

metallb-controller Updated

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-a68c7101-amd64

metallb-speaker Updated

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-a68c7101-amd64

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.28.1-5.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.4-7.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.28.1-4

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-21.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-6.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-6.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-6.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-6.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-6.cve

rook

mirantis.azurecr.io/ceph/rook:v1.13.5-22

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-6.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.40.28.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.40.28.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.40.28.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.40.28.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.40.28.tgz

Docker images Updated

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-22

client-certificate-controller

mirantis.azurecr.io/core/client-certificate-controller:1.40.28

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-7

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-7

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-7

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-7

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-6

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-7

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-9

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-22

policy-controller

mirantis.azurecr.io/core/policy-controller:1.40.28

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.25.0-44-g561405b/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.40.28

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.40.28.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.40.28.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.40.28

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.25.0-44-g561405b

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.25.0-44-g561405b

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-238.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-309.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.15.12.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20241021023014

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20241021023014

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20241022074315

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20241021023015

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20241021023014

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20241022103051

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20241021111512

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.27-20240925023021

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20241021023015

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20241021023014

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-12

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20241021023014

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20241021023014

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240925023021

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20241021023015

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20241021023014

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20241021023015

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240925023019

psql-client Updated

mirantis.azurecr.io/scale/psql-client:v13-20240924065857

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20241021023015

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20241021111607

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20241021023014

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20241021023014 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20241018175310 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20241021023014

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20241021023014

System and MCR artifacts
17.2.4

This section includes release notes for the patch Cluster release 17.2.4 that is introduced in the Container Cloud patch release 2.27.4 and is based on the previous Cluster releases of the 17.2.x series.

This patch Cluster release introduces MOSK 24.2.2 that is based on Mirantis Kubernetes Engine 3.7.12 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.11.

This section lists the artifacts of components included in the Cluster release 17.2.4.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.27.3-10.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.4-4.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.27.3-9

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-18.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-5.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-5.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-5.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-5.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-5.cve

rook

mirantis.azurecr.io/ceph/rook:v1.13.5-19

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-5.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.40.23.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.40.23.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.40.23.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.40.23.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.40.23.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.40.23.tgz

Docker images

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-18

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.40.23

csi-attacher Updated

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-6

csi-node-driver-registrar Updated

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-6

csi-provisioner Updated

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-6

csi-resizer Updated

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-6

csi-snapshotter Updated

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-5

livenessprobe Updated

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-6

metrics-server Updated

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-8

openstack-cloud-controller-manager Updated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-18

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.40.23

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.25.0-42-g8710cbe/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.40.23

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.40.23.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.40.23.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.40.23

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-238.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-300.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.15.9.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240821023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240821023016

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240821023017

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240821023015

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240821023009

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240821023018

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.27-20240821023018

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240821023015

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240626023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-10

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240821023015

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240821023010

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240821023018

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240821023016

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240821023016

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240821023015

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20240701095027

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240821023015

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240822083023

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240821023009 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240426131156

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240821023015

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240821023021

System and MCR artifacts
17.2.3

This section includes release notes for the patch Cluster release 17.2.3 that is introduced in the Container Cloud patch release 2.27.3 and is based on the Cluster releases 17.2.0 and 16.2.3.

This patch Cluster release introduces MOSK 24.2.1 that is based on Mirantis Kubernetes Engine 3.7.12 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.11.

This section lists the artifacts of components included in the Cluster release 17.2.3.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.27.3-8.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.4-3.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.27.3-7

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-17.release

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-5.release

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-5.release

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-5.release

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-5.release

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-5.release

rook

mirantis.azurecr.io/ceph/rook:v1.13.5-18

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-5.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.40.21.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.40.21.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.40.21.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.40.21.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.40.21.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.40.21.tgz

Docker images

cinder-csi-plugin Updated

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-18

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.40.21

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-5

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-5

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-5

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-5

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-4

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-5

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-7

openstack-cloud-controller-manager Updated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-18

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.40.21

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.25.0-42-g8710cbe/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.40.21

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.40.21.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.40.21.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.40.21

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-238.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-300.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch Updated

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.15.8.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240807023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240807023013

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240807023018

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240807023010

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240807023010

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240807023017

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.27-20240812134116

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240807023014

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240626023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-9

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240807023012

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240807023009

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240807023019

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240807023016

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240807023017

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240807023018

psql-client Updated

mirantis.azurecr.io/scale/psql-client:v13-20240701095027

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240807023014

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240807023011

stacklight-toolkit Removed

n/a

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240605023010 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240426131156

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240812121935

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240807023014

System and MCR artifacts
17.2.0

This section outlines release notes for the major Cluster release 17.2.0 that is introduced in the Container Cloud release 2.27.0. This Cluster release is based on the Cluster release 16.2.0. The Cluster release 17.2.0 supports Mirantis OpenStack for Kubernetes (MOSK) 24.2, Mirantis Kubernetes Engine 3.7.8 with Kubernetes 1.27, and Mirantis Container Runtime 23.0.11.

For the list of known and addressed issues, refer to the Container Cloud release 2.27.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 17.2.0 that is introduced in the Container Cloud release 2.27.0.

Support for MKE 3.7.8

Introduced support for Mirantis Kubernetes Engine (MKE) 3.7.8 that supports Kubernetes 1.27. On existing clusters, MKE is updated to the latest supported version when you update your managed cluster to the Cluster release 17.2.0.

Note

This enhancement applies to users who follow the update train using major releases. Users who install patch releases have already obtained MKE 3.7.8 in Container Cloud 2.26.4 (Cluster release 17.1.4).

Improvements in the MKE benchmark compliance

Analyzed and fixed the majority of failed compliance checks in the MKE benchmark for Container Cloud core components and StackLight. The following controls were analyzed; a minimal illustration of the pod-level setting verified by control 5.2.5 follows the table:

Control ID

Component

Control description

Analyzed item

5.1.2

client-certificate-controller
helm-controller
local-volume-provisioner

Minimize access to secrets

ClusterRoles with get, list, and watch access to Secret objects in a cluster

5.1.4

local-volume-provisioner

Minimize access to create pods

ClusterRoles with the create access to pod objects in a cluster

5.2.5

client-certificate-controller
helm-controller
policy-controller
stacklight

Minimize the admission of containers with allowPrivilegeEscalation

Containers with allowPrivilegeEscalation capability enabled
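
The following minimal pod manifest sketch illustrates the container-level setting that control 5.2.5 verifies. The manifest, names, and image are hypothetical and do not reproduce the actual Container Cloud component manifests:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-app                        # hypothetical name, for illustration only
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.0  # placeholder image
          securityContext:
            # The setting verified by control 5.2.5: the container must not
            # allow its processes to gain more privileges than the parent process.
            allowPrivilegeEscalation: false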

Automatic upgrade of Ceph from Quincy to Reef

Upgraded Ceph major version from Quincy 17.2.7 (17.2.7-12.cve in the patch release train) to Reef 18.2.3 with an automatic upgrade of Ceph components on existing managed clusters during the Cluster version update.

Ceph Reef delivers a new version of RocksDB that provides better I/O performance. This version also supports RGW multisite resharding and contains overall security improvements.

Support for Rook v1.13 in Ceph

Added support for Rook v1.13 that contains the Ceph CSI plugin 3.10.x as the default supported version. For a complete list of features and breaking changes, refer to the official Rook documentation.

Setting a configuration section for Rook parameters

Implemented the section option for the rookConfig parameter, which enables you to specify the section where a Rook parameter must be placed. Using this option restarts only the specific daemons related to the corresponding section instead of restarting all Ceph daemons except Ceph OSDs.

Monitoring of I/O errors in kernel logs

Implemented monitoring of disk and I/O errors in kernel logs to detect hardware and software issues. The implementation includes the dedicated KernelIOErrorsDetected alert, the kernel_io_errors_total metric that is collected on the Fluentd side using the I/O error patterns, and a general refactoring of the metrics created in Fluentd.
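
As a sketch only, assuming the standard Prometheus alerting-rule format and reusing the alert and metric names mentioned above, the new alert could be expressed similarly to the following. The expression, time windows, severity, and label names are assumptions; the actual rule shipped with StackLight may differ:

    groups:
      - name: kernel-io-errors                  # hypothetical rule group name
        rules:
          - alert: KernelIOErrorsDetected
            # Fires when new I/O errors appear in kernel logs, based on the
            # kernel_io_errors_total counter collected on the Fluentd side.
            expr: increase(kernel_io_errors_total[10m]) > 0
            for: 5m
            labels:
              severity: warning                 # assumed severity
            annotations:
              summary: "I/O errors detected in kernel logs on {{ $labels.node }}"  # label name assumed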

S.M.A.R.T. metrics for creating alert rules on bare metal clusters

Added documentation describing usage examples of alert rules based on S.M.A.R.T. metrics to monitor disk information on bare metal clusters.

The StackLight telegraf-ds-smart exporter uses the S.M.A.R.T. plugin to obtain detailed disk information and export it as metrics. S.M.A.R.T. is a commonly used system across vendors with performance data provided as attributes.
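
For example, assuming that the exporter exposes a disk temperature metric named smart_device_temp_c (the metric name is an assumption and depends on the Telegraf S.M.A.R.T. plugin configuration), a custom alert rule in the same Prometheus format could look as follows:

    - alert: DiskTemperatureHigh                # hypothetical custom alert
      # smart_device_temp_c is an assumed metric name; adjust it to the metric
      # actually exported by telegraf-ds-smart in your deployment.
      expr: smart_device_temp_c > 60
      for: 15m
      labels:
        severity: warning
      annotations:
        summary: "Disk {{ $labels.device }} temperature exceeds 60 C"  # label name assumed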

Improvements for OpenSearch and OpenSearch Indices Grafana dashboards

Improved the performance and usability of the OpenSearch and OpenSearch Indices Grafana dashboards and added the capability to minimize the number of indices displayed on the dashboards.

Removal of grafana-image-renderer from StackLight

As part of StackLight refactoring, removed grafana-image-renderer from the Grafana installation in Container Cloud. StackLight used this component only for image generation in the Grafana web UI, which can be easily replaced with standard screenshots.

The improvement optimizes resource usage and prevents potential CVEs that frequently affect this component.

Components versions

The following table lists the components versions of the Cluster release 17.2.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.7.8 0

Container runtime Updated

Mirantis Container Runtime

23.0.11 1

Distributed storage

Ceph

18.2.3-1.release (Reef)

Rook

1.13.5-15

LCM Updated

helm-controller

1.40.11

lcm-ansible

0.25.0-37-gc15c97d

lcm-agent

1.40.11

StackLight

Alerta

9.0.1

Alertmanager

0.25.0

Alertmanager Webhook ServiceNow

0.1

Blackbox Exporter

0.24.0

cAdvisor

0.47.2

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.5.0

Fluentd

1.15.3

Grafana

10.3.1

Grafana Image Renderer Removed

n/a

kube-state-metrics

2.10.1

Metric Collector

0.1

Metricbeat

7.12.1

Node Exporter

1.7.0

OAuth2 Proxy

7.1.3

OpenSearch

2.12.0

OpenSearch Dashboards

2.12.0

Prometheus

2.48.0

Prometheus ES Exporter

0.14.0

Prometheus MS Teams

1.5.2

Prometheus Patroni Exporter

0.0.1

Prometheus Postgres Exporter

0.15.0

Prometheus Relay

0.4

sf-notifier

0.4

sf-reporter

0.1

Spilo

13-2.1p9

Telegraf

1.9.1

1.30.2

Telemeter

4.4

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the artifacts of components included in the Cluster release 17.2.0.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.27.0-7.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.3-1.release

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.27.0-6

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-12.release

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-4.release

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-4.release

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-4.release

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-4.release

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-4.release

rook

mirantis.azurecr.io/ceph/rook:v1.13.5-15

snapshot-controller New

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-4.release

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.25.0-37-gc15c97d/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.40.11

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.40.11.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.40.11.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.40.11

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-238.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-300.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch Updated

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-87.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.15.3.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web

mirantis.azurecr.io/stacklight/alerta-web:9-20240515023009

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0-20240515023012

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils

mirantis.azurecr.io/stacklight/alpine-utils:1-20240515023017

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240515023012

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240611084259

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

grafana-image-renderer Removed

n/a

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240515023018

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.22-20240515023015

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240515023016

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240515023009

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-8

opensearch

mirantis.azurecr.io/stacklight/opensearch:2-20240515023012

opensearch-dashboards

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240515023010

pgbouncer

mirantis.azurecr.io/stacklight/pgbouncer:1-20240515023018

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240515023016

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240515023017

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240515023017

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20240222083402

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240515023012

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240515023010

stacklight-toolkit

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240515023016

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240515023008

mirantis.azurecr.io/stacklight/telegraf:1-20240426131156

telemeter

mirantis.azurecr.io/stacklight/telemeter:4.4-20240515023015

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240515023012

System and MCR artifacts
17.1.x series

Major and patch versions update path

The primary distinction between major and patch product versions is that major release versions introduce new functionality, whereas patch release versions predominantly offer minor product enhancements, mostly CVE resolutions, for your clusters.

Depending on your deployment needs, you can either update only between major Cluster releases or apply patch updates between major releases. The latter option ensures that you receive security fixes as soon as they become available, but be prepared to update your cluster frequently, approximately once every three weeks. Otherwise, you can update only between major Cluster releases because each subsequent major Cluster release includes the patch Cluster release updates of the previous major Cluster release.

This section outlines release notes for deprecated major and patch Cluster releases of the 17.1.x series dedicated to Mirantis OpenStack for Kubernetes (MOSK).

17.1.7

This section includes release notes for the patch Cluster release 17.1.7 that is introduced in the Container Cloud patch release 2.27.2 and is based on the previous Cluster releases of the 17.1.x series.

This patch Cluster release introduces MOSK 24.1.7 that is based on Mirantis Kubernetes Engine 3.7.11 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.9.

This section lists the artifacts of components included in the Cluster release 17.1.7.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.27.1-6.tgz

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.7-15.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.27.1-5

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-15.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-3.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-3.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-3.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-3.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-3.cve

rook

mirantis.azurecr.io/ceph/rook:v1.12.10-21

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.39.31.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.39.31.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.39.31.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.39.31.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.39.31.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.39.31.tgz

vsphere-cloud-controller-manager

https://binary.mirantis.com/core/helm/vsphere-cloud-controller-manager-1.39.31.tgz

vsphere-csi-plugin

https://binary.mirantis.com/core/helm/vsphere-csi-plugin-1.39.31.tgz

Docker images

cinder-csi-plugin Updated

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-17

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.39.31

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-5

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-5

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-5

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-5

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-4

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-5

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-7

openstack-cloud-controller-manager Updated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-17

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.39.31

vsphere-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/vsphere-cloud-controller-manager:v1.27.0-6

vsphere-csi-driver

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-driver:v3.0.2-1

vsphere-csi-syncer

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-syncer:v3.0.2-1

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.24.0-52-gd8adaba/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.39.31

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.39.31.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.39.31.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.39.31

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-223.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-290.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.14.15.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240710023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240710023018

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240710023020

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240710023011

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240710023010

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20240318142141

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240710023020

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240710023014

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240710023014

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240626023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-9

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240710023013

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240710023010

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240710023020

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240710023017

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240710023019

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240710023019

psql-client Updated

mirantis.azurecr.io/scale/psql-client:v13-20240701095027

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240710023015

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240710023011

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240710023015

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240605023010

mirantis.azurecr.io/stacklight/telegraf:1-20240426131156

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240710023015

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240710023019

System and MCR artifacts
17.1.6

This section includes release notes for the patch Cluster release 17.1.6 that is introduced in the Container Cloud patch release 2.27.1 and is based on the previous Cluster releases of the 17.1.x series.

This patch Cluster release introduces MOSK 24.1.6 that is based on Mirantis Kubernetes Engine 3.7.10 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.9, in which docker-ee-cli was updated to version 23.0.13 to fix several CVEs.

This section lists the artifacts of components included in the Cluster release 17.1.6.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.27.1-6.tgz

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v17.2.7-15.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.27.1-5

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-15.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-3.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-3.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-3.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-3.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-3.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.12.10-21

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.39.29.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.39.29.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.39.29.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.39.29.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.39.29.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.39.29.tgz

vsphere-cloud-controller-manager

https://binary.mirantis.com/core/helm/vsphere-cloud-controller-manager-1.39.29.tgz

vsphere-csi-plugin

https://binary.mirantis.com/core/helm/vsphere-csi-plugin-1.39.29.tgz

Docker images

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-16

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.39.29

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-5

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-5

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-5

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-5

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-4

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-5

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-7

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-16

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.39.29

vsphere-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/vsphere-cloud-controller-manager:v1.27.0-6

vsphere-csi-driver

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-driver:v3.0.2-1

vsphere-csi-syncer

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-syncer:v3.0.2-1

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.24.0-52-gd8adaba/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.39.29

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.39.29.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.39.29.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.39.29

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-223.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-290.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch Updated

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.14.14.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240701140358

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240701140403

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240701140404

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240701140359

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240701140357

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20240318142141

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240701140403

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240701140401

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240701140400

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240626023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-8

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240701140359

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240701140352

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240701140404

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240701140403

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240701140402

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240701140404

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20240222083402

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240701140403

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240701140359

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240701140402

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240605023010 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240426131156

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240701140401

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240701140402

System and MCR artifacts
17.1.5

This section includes release notes for the patch Cluster release 17.1.5 that is introduced in the Container Cloud patch release 2.26.5 and is based on the previous Cluster releases of the 17.1.x series.

This patch Cluster release introduces MOSK 24.1.5 that is based on Mirantis Kubernetes Engine 3.7.8 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.9.

This section lists the artifacts of components included in the Cluster release 17.1.5.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.26.5-1.tgz

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v17.2.7-13.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.26.5-0

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-10.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-3.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-3.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-3.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-3.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-3.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.12.10-19

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.24.0-47-gf77368e/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.39.28

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.39.28.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.39.28.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.39.28

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-223.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-290.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-86.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.14.11.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240515023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240515023012

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240515023017

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240515023012

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240515023009

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20240318142141

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240515023018

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240515023015

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240515023016

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240515023009

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-8

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240515023012

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240515023010

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240515023018

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240515023016

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240515023017

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240515023017

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20240222083402

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240515023012

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240515023010

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240515023016

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240515023008 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240426131156

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240515023015

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240515023012

System and MCR artifacts
17.1.4

This section includes release notes for the patch Cluster release 17.1.4 that is introduced in the Container Cloud patch release 2.26.4 and is based on the previous Cluster releases of the 17.1.x series.

This patch Cluster release introduces MOSK 24.1.4 that is based on Mirantis Kubernetes Engine 3.7.8 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.9.

This section lists the artifacts of components included in the Cluster release 17.1.4.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.26.4-1.tgz

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v17.2.7-12.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.26.4-0

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-9.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-3.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-3.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-3.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-3.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-3.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.12.10-18

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.24.0-47-gf77368e/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.39.26

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.39.26.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.39.26.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.39.26

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-223.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-290.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-86.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.14.10.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240424023010

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240424023016

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240424023018

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240424023015

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240424023010

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20240318142141

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240424023020

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240424023017

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240424023015

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240424023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-8

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240424023015

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240424023010

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240424023020

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240424023018

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240424023018

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240424023017

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20240222083402

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240424023015

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240424023015

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240424023017

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240424023009 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240426131156 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240424023014

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240424023015

System and MCR artifacts
17.1.3

This section includes release notes for the patch Cluster release 17.1.3 that is introduced in the Container Cloud patch release 2.26.3 and is based on the previous Cluster releases of the 17.1.x series.

This patch Cluster release introduces MOSK 24.1.3 that is based on Mirantis Kubernetes Engine 3.7.7 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.9.

This section lists the artifacts of components included in the Cluster release 17.1.3.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.26.3-1.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.7-11.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.26.3-0

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-8.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-3.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-3.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-3.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-3.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-3.cve

rook

mirantis.azurecr.io/ceph/rook:v1.12.10-17

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.24.0-47-gf77368e/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.39.23

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.39.23.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.39.23.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.39.23

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-223.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-290.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-86.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.14.9.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240403023008

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240408080051

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240403023017

blackbox-exporter Updated

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240408140050

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240403023009

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20240318142141

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240403023017

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240403023014

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240408155718

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240408135717

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-8

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240403023014

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240403023009

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240403023017

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240403023016

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240403023017

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240408135804

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20240222083402

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240403023015

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240403023013

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240403023016

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240403023008 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240306130859

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240408155750

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240408155738

System and MCR artifacts
17.1.2

This section includes release notes for the patch Cluster release 17.1.2 that is introduced in the Container Cloud patch release 2.26.2 and is based on the Cluster releases 17.1.1 and 17.1.0.

This patch Cluster release introduces MOSK 24.1.2 that is based on Mirantis Kubernetes Engine 3.7.6 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.9, in which docker-ee-cli was updated to version 23.0.10 to fix several CVEs.

This section lists the artifacts of components included in the Cluster release 17.1.2.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.26.2-4.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.7-10.release

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.26.2-3

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-7.release

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-2.release

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-2.release

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-2.release

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-2.release

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-2.release

rook

mirantis.azurecr.io/ceph/rook:v1.12.10-16

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.24.0-47-gf77368e/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.39.19

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.39.19.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.39.19.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.39.19

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-223.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-290.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-86.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.14.8.tgz

telegraf-ds Updated

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s Updated

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240318062240

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240318062244

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240318062249

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20231204053401

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240318062245

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240318062244

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20240318142141

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240318062249

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240318062246

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240318062249

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240318062240

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-7

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240318062244

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240318062241

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240318062240

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240318062248

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20231204064415

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240318062250

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240318062249

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20240222083402

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240318062246

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240318062245

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240318062247

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240318062240 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240306130859 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240318062245

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240318062247

System and MCR artifacts
17.1.1

This section includes release notes for the patch Cluster release 17.1.1 that is introduced in the Container Cloud patch release 2.26.1 and is based on the Cluster release 17.1.0.

This patch Cluster release introduces MOSK 24.1.1 that is based on Mirantis Kubernetes Engine 3.7.5 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.9.

This section lists the artifacts of components included in the Cluster release 17.1.1.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.26.1-1.tgz

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v17.2.7-9.release

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.26.1-0

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-5.release

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-1.release

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-1.release

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-1.release

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-1.release

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-1.release

rook Updated

mirantis.azurecr.io/ceph/rook:v1.12.10-14

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.24.0-47-gf77368e/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.39.15

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.39.15.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.39.15.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.39.15

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-223.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-285.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch Updated

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-86.tgz

opensearch-dashboards Updated

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.14.7.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-40.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-41.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240228023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240228023011

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240226135626

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240228023020

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20231204053401

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240228023015

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240228023011

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20240228060359

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240228023018

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240228023017

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240228023015

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240228023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-7

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240228023015

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240228023009

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240228023020

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240228023015

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20231204064415

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240228023020

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240228023015

psql-client Updated

mirantis.azurecr.io/scale/psql-client:v13-20240222083402

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240228023016

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240226135743

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240228023016

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240228023017

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240228023008 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240219105842 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240228023013

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240228023014

System and MCR artifacts
17.1.0

This section outlines release notes for the major Cluster release 17.1.0 that is introduced in the Container Cloud release 2.26.0. This Cluster release is based on the Cluster release 16.1.0. The Cluster release 17.1.0 supports:

For the list of known and addressed issues, refer to the Container Cloud release 2.26.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 17.1.0 that is introduced in the Container Cloud release 2.26.0.

Support for MKE 3.7.5 and MCR 23.0.9

Introduced support for Mirantis Container Runtime (MCR) 23.0.9 and Mirantis Kubernetes Engine (MKE) 3.7.5 that supports Kubernetes 1.27.

On existing MOSK clusters, MKE and MCR are updated to the latest supported version when you update your cluster to the Cluster release 17.1.0.

Support for Rook v1.12 in Ceph

Added support for Rook v1.12 that contains the Ceph CSI plugin 3.9.x and introduces automated recovery of RBD (RWO) volumes from a failed node onto a new one, ensuring uninterrupted operations.

For a complete list of features introduced in the new Rook version, refer to official Rook documentation.

Support for custom device classes in a Ceph cluster

TechPreview

Implemented the customDeviceClasses parameter that enables you to specify custom device class names, in addition to the default ssd, hdd, and nvme classes, and use them in the nodes and pools definitions.

Using this parameter, you can, for example, separate the storage of large snapshots from the rest of the Ceph cluster storage.
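
For illustration only, the following sketch shows how a custom device class could be declared and then referenced in the nodes and pools definitions. The field layout and the archive class name are assumptions and may differ from the actual Ceph cluster specification:

    # Illustrative sketch only. The field layout is an assumption and may
    # differ from the actual Ceph cluster specification; the custom class
    # name "archive" is hypothetical.
    customDeviceClasses:
      - archive                      # custom class in addition to ssd, hdd, nvme
    nodes:
      worker-0:
        storageDevices:
          - name: sdb
            config:
              deviceClass: archive   # device assigned to the custom class
    pools:
      - name: snapshots
        deviceClass: archive         # pool backed only by the custom class
        replicated:
          size: 3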

Network policies for Rook Ceph daemons

To enhance network security, added NetworkPolicy objects for all types of Ceph daemons. These policies allow only specified ports to be used by the corresponding Ceph daemon pods.
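
For illustration only, the following sketch shows a NetworkPolicy of the described kind that restricts ingress to the well-known Ceph Monitor ports. The actual policies are created automatically by the product; their names, selectors, and port lists may differ:

    # Purely illustrative example of the described approach; not the exact
    # policy shipped with the product.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: ceph-mon-allow-ports     # hypothetical name
      namespace: rook-ceph           # assumes the default Rook namespace
    spec:
      podSelector:
        matchLabels:
          app: rook-ceph-mon         # target only the Ceph Monitor pods
      policyTypes:
        - Ingress
      ingress:
        - ports:
            - protocol: TCP
              port: 3300             # Ceph messenger v2
            - protocol: TCP
              port: 6789             # Ceph messenger v1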

Upgraded logging pipeline in StackLight

Completely reorganized and significantly improved the StackLight logging pipeline by implementing the following changes:

  • Switched to a storage-based log retention strategy that optimizes storage utilization and ensures effective data retention by allocating storage resources based on the importance and volume of different data types. The new logging index management provides the following advantages:

    • Storage-based rollover mechanism

    • Consistent shard allocation

    • Minimal size of cluster state

    • Storage compression

    • No filter by logging level (filtering by tag is still available)

    • Control over the disk space to be consumed by the following index types:

      • Logs

      • OpenStack notifications

      • Kubernetes events

  • Introduced new system and audit indices that are managed by OpenSearch data streams, which is a convenient way to manage insert-only pipelines such as log message collection.

  • Introduced the OpenSearchStorageUsageCritical and OpenSearchStorageUsageMajor alerts to monitor OpenSearch used and free space from the file system perspective.

  • Introduced the following parameters, illustrated in the configuration sketch after this list:

    • persistentVolumeUsableStorageSizeGB to define exclusive OpenSearch node usage

    • output_kind to define the type of logs to be forwarded to external outputs
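
For illustration only, the following flat sketch shows how the new parameters could look in the StackLight configuration. The key nesting and the example values are assumptions:

    # Illustrative values; the exact location of these keys in the StackLight
    # configuration may differ from this flat sketch.
    persistentVolumeUsableStorageSizeGB: 80   # storage used exclusively by the OpenSearch node
    output_kind: logs                         # type of logs forwarded to external outputs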

Important

Changes in the StackLight logging pipeline require specific actions before and after the managed cluster update.

Support for custom labels during alert injection

Added the alertsCommonLabels parameter for the Prometheus server that defines the list of custom labels to be injected into firing alerts when they are sent to Alertmanager.

Caution

When new labels are injected, Prometheus sends alert updates with a new set of labels, which can potentially cause Alertmanager to have duplicated alerts for a short period of time if the cluster currently has firing alerts.
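
For illustration only, the following sketch shows custom labels defined through alertsCommonLabels. The nesting under the Prometheus server configuration and the label names are assumptions:

    # Illustrative example; the label names and values are hypothetical.
    prometheusServer:
      alertsCommonLabels:
        region: eu-west        # injected into every firing alert
        cluster_type: mosk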

Components versions

The following table lists the components versions of the Cluster release 17.1.0.

Component

Application/Service

Version

Cluster orchestration Updated

Mirantis Kubernetes Engine

3.7.5 0

Container runtime Updated

Mirantis Container Runtime

23.0.9 1

Distributed storage Updated

Ceph

17.2.7 (Quincy)

Rook

1.12.10

StackLight

Alerta Updated

9.0.1

Alertmanager

0.25.0

Alertmanager Webhook ServiceNow

0.1

Blackbox Exporter

0.24.0

cAdvisor

0.47.2

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.5.0

Fluentd

1.15.3

Grafana Updated

10.3.1

Grafana Image Renderer Updated

3.8.4

kube-state-metrics Updated

2.10.1

Metric Collector

0.1

Metricbeat

7.12.1

Node Exporter Updated

1.7.0

OAuth2 Proxy

7.1.3

OpenSearch Updated

2.11.0

OpenSearch Dashboards Updated

2.11.1

Prometheus Updated

2.48.0

Prometheus ES Exporter

0.14.0

Prometheus MS Teams

1.5.2

Prometheus Patroni Exporter

0.0.1

Prometheus Postgres Exporter Updated

0.15.0

Prometheus Relay

0.4

sf-notifier

0.4

sf-reporter

0.1

Spilo

13-2.1p9

Telegraf

1.9.1

1.28.5 Updated

Telemeter

4.4

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the artifacts of components included in the Cluster release 17.1.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.26.0-16.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.7-8.release

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.26.0-15

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-4.release

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-1.release

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-1.release

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-1.release

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-1.release

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-1.release

rook

mirantis.azurecr.io/ceph/rook:v1.12.10-13

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.24.0-46-gdaf7dbc/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.39.13

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.39.13.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.39.13.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.39.13

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta Updated

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow Updated

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor Updated

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-219.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-278.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat Updated

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch Updated

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-80.tgz

opensearch-dashboards Updated

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-53.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams Updated

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp Updated

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-16.tgz

sf-notifier Updated

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter Updated

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.14.2.tgz

telegraf-ds Updated

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-40.tgz

telegraf-s Updated

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-41.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240201074016

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240201074016

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240119023014

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240201074025

blackbox-exporter Updated

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20231204053401

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240201074020

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq Removed

n/a

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20231215023011

grafana Updated

mirantis.azurecr.io/stacklight/grafana:10.3.1

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20231124023009

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240201074025

kube-state-metrics Updated

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240201074022

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240201074019

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240201074016

node-exporter Updated

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-7

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240201074019

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240201074016

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240201074024

prometheus Updated

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240201074023

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20231204064415

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240201074021

prometheus-postgres-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240201074019

psql-client Updated

mirantis.azurecr.io/scale/psql-client:v13-20240117093252

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240201074022

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240119124536

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240201074020

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240201074021

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240201074016 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240201074023 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240201074019

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240201074020

System and MCR artifacts
17.0.x series

Major and patch versions update path

The primary distinction between major and patch product versions is that major release versions introduce new functionality, whereas patch release versions predominantly deliver minor product enhancements, mostly CVE resolutions, for your clusters.

Depending on your deployment needs, you can either update only between major Cluster releases or apply patch updates between major releases. Choosing the latter option ensures that you receive security fixes as soon as they become available, though be prepared to update your cluster frequently, approximately once every three weeks. Otherwise, you can update only between major Cluster releases because each subsequent major Cluster release includes the patch Cluster release updates of the previous major Cluster release.

This section outlines release notes for the unsupported major and patch Cluster releases of the 17.0.x series dedicated to Mirantis OpenStack for Kubernetes (MOSK).

17.0.4

This section includes release notes for the patch Cluster release 17.0.4 that is introduced in the Container Cloud patch release 2.25.4 and is based on Cluster releases 17.0.0, 17.0.1, 17.0.2, and 17.0.3.

This patch Cluster release introduces MOSK 23.3.4 that is based on Mirantis Kubernetes Engine 3.7.3 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.7.

This section lists the artifacts of components included in the Cluster release 17.0.4.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.25.4-1

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.6-8.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.25.4-0

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.8.1-9.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-2.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-2.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-2.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-2.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-2.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.11.11-22

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.23.0-88-g35be0fc/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.38.33

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.38.33.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.38.33.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.38.33

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-6.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-196.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-254.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-63.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-49.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-257.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.13.12.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-40.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-40.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20231215023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20231215023011

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20231211141923

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20231215023021

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20231204053401

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20231215023012

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq

mirantis.azurecr.io/scale/curl-jq:alpine-20231127081128

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20231215023011

grafana

mirantis.azurecr.io/stacklight/grafana:10.2.2

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20231124023009

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20231215023018

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20231226150248

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20231215023013

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20231215023009

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-6

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20231215023014

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20231215023009

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20231215023019

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20231215023018

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20231204064415

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20231215023018

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20231215023011

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20231116082249

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20231215023014

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20231211141939

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20231215023013

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20231215023015

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20231215023009 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20231204142011

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20231215023013

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20231215023013

System and MCR artifacts

Unchanged as compared to 17.0.0

17.0.3

This section includes release notes for the patch Cluster release 17.0.3 that is introduced in the Container Cloud patch release 2.25.3 and is based on Cluster releases 17.0.0, 17.0.1, and 17.0.2.

This patch Cluster release introduces MOSK 23.3.3 that is based on Mirantis Kubernetes Engine 3.7.3 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.7.

This section lists the artifacts of components included in the Cluster release 17.0.3.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.25.3-3

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v17.2.6-8.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.25.3-0

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.8.1-8.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-2.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-2.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-2.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-2.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-2.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.11.11-21

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.23.0-87-gc9d7d3b/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.38.31

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.38.31.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.38.31.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.38.31

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-6.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-196.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-254.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-63.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-49.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-257.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.13.10.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-40.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-40.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20231201023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20231201023012

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20231114075954

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20231201023019

blackbox-exporter Updated

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20231204053401

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20231201023011

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq Updated

mirantis.azurecr.io/scale/curl-jq:alpine-20231127081128

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20231204142422

grafana Updated

mirantis.azurecr.io/stacklight/grafana:10.2.2

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20231124023009

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20231201023018

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20231201023019

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20231201023014

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20231201023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-6

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20231201023011

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20231201023009

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20231201023014

prometheus Updated

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20231201023015

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20231204064415

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20231201023016

prometheus-postgres-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20231201023016

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20231116082249

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20231201023011

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20231110023016

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20231207134103

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20231201023015

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20231207133615 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20231204142011 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20231201023015

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20231201023012

System and MCR artifacts

Unchanged as compared to 17.0.0

17.0.2

This section includes release notes for the patch Cluster release 17.0.2 that is introduced in the Container Cloud patch release 2.25.2 and is based on Cluster releases 17.0.0 and 17.0.1.

This patch Cluster release introduces MOSK 23.3.2 that is based on Mirantis Kubernetes Engine 3.7.2 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.7.

This section lists the artifacts of components included in the Cluster release 17.0.2.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.25.2-3

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v17.2.6-5.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.25.2-0

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.8.1-6.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-2.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-2.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-2.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-2.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-2.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.11.11-17

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.23.0-84-g8d74d7c/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.38.29

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.38.29.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.38.29.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.38.29

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-6.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-196.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-254.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-63.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-49.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-57.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-257.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.13.8.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-40.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-40.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20231117023008

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20231121101237

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20231114075954

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20231117023019

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.24.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20231121100850

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq

mirantis.azurecr.io/scale/curl-jq:alpine-20231019061751

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20231117023010

grafana

mirantis.azurecr.io/stacklight/grafana:9.5.13

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20231030112043

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20231117023017

kube-state-metrics Updated

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20231117023017

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20231117023011

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20231117023008

node-exporter Updated

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-6

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20231121103248

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20231121104249

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20231117023020

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.44.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20231117023017

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20231117023018

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20231117023012

psql-client Updated

mirantis.azurecr.io/scale/psql-client:v13-20231116082249

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20231117023016

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20231110023016

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20231117023015

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20231117023017

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20231110023008 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20231030132045

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20231117023011

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20231117023011

System and MCR artifacts

Unchanged as compared to 17.0.0

17.0.1

This section includes release notes for the patch Cluster release 17.0.1 that is introduced in the Container Cloud patch release 2.25.1 and is based on the Cluster release 17.0.0.

This patch Cluster release introduces MOSK 23.3.1 that is based on Mirantis Kubernetes Engine 3.7.2 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.7.

This section lists the artifacts of components included in the Cluster release 17.0.1.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.25.1-9

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.6-2.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.25.1-8

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.8.1-4.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-2.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-2.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-2.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-2.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-2.cve

rook

mirantis.azurecr.io/ceph/rook:v1.11.11-15

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.23.0-84-g8d74d7c/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.38.22

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.38.22.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.38.22.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.38.22

StackLight artifacts

Artifact

Component

Path

Helm charts Updated

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-6.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-196.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-254.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-63.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-49.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-57.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-257.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight

https://binary.mirantis.com/stacklight/helm/stacklight-0.13.7.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-40.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-40.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20231103023010

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20231103023014

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20231027101957

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20231027023014

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.24.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20231027023014

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq Updated

mirantis.azurecr.io/scale/curl-jq:alpine-20231019061751

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20231027023015

grafana Updated

mirantis.azurecr.io/stacklight/grafana:9.5.13

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20231030112043

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20231030141315

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20231103023015

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20231103023010

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20231027023009

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.6.0

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-5

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20231103023014

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20231103023010

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20231103023015

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.44.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20231103023015

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20231103023015

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20231103023010

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20230817113822

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20231027023020

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230911151029

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20231103023014

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20231103023015

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20231103023010 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20231030132045 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20231027023011

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20231103023014

System and MCR artifacts

Unchanged as compared to 17.0.0

17.0.0

This section outlines release notes for the major Cluster release 17.0.0 that is introduced in the Container Cloud release 2.25.0. This Cluster release is based on the Cluster release 16.0.0 and supports Mirantis Kubernetes Engine 3.7.1 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.7.

For the list of known and addressed issues, refer to the Container Cloud release 2.25.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 17.0.0 that is introduced in the Container Cloud release 2.25.0.

Support for MKE 3.7.1 and MCR 23.0.7

Introduced support for Mirantis Container Runtime (MCR) 23.0.7 and Mirantis Kubernetes Engine (MKE) 3.7.1 that supports Kubernetes 1.27 for the Container Cloud management and managed clusters. On existing clusters, MKE and MCR are updated to the latest supported version when you update your managed cluster to the Cluster release 17.0.0.

Caution

Support for MKE 3.6.x is dropped. Therefore, new deployments on MKE 3.6.x are not supported.

Detailed view of a Ceph cluster summary in web UI

Implemented the Ceph Cluster details page in the Container Cloud web UI containing the Machines and OSDs tabs with detailed descriptions and statuses of the Ceph machines and Ceph OSDs that comprise a Ceph cluster deployment.

Addressing storage devices using by-id identifiers

Implemented the capability to address Ceph storage devices using the by-id identifiers.

The by-id identifier is the only persistent device identifier for a Ceph cluster that remains stable across cluster upgrades and other maintenance. Therefore, Mirantis recommends using device by-id symlinks rather than device names or by-path symlinks.
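For illustration only, the following KaaSCephCluster fragment shows a storage device addressed by a by-id symlink instead of a device name. The fullPath field name, the node name, and the by-id value are assumptions made for this sketch; refer to the Ceph configuration procedures in the Operations Guide for the authoritative specification.

    spec:
      cephClusterSpec:
        nodes:
          worker-0:                   # hypothetical node name
            storageDevices:
            - fullPath: /dev/disk/by-id/wwn-0x5000c500a0b1c2d3  # hypothetical by-id symlink
              config:
                deviceClass: hdd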

Verbose Ceph cluster status

Added the kaasCephState field in the KaaSCephCluster.status specification to display the current state of KaaSCephCluster and any errors during object reconciliation, including specification generation, object creation on a managed cluster, and status retrieval.
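As an illustration only, the new field can be inspected in the KaaSCephCluster status. The nested key and value below are hypothetical placeholders; only the kaasCephState field itself is defined by this release:

    status:
      kaasCephState:
        state: Ready    # hypothetical value; errors from specification generation, object creation, or status retrieval would appear here instead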

Fluentd log forwarding to Splunk

TechPreview

Added initial Technology Preview support for forwarding Container Cloud service logs, which are sent to OpenSearch by default, to Splunk using the syslog external output configuration.
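A minimal sketch of such a configuration in the StackLight logging values, assuming a syslog-type external output that points at a Splunk syslog listener. Every key name below is an assumption made for illustration, so verify the actual parameter structure in the Operations Guide before use:

    logging:
      externalOutputs:              # assumed name of the external outputs section
        splunk-syslog:              # hypothetical output name
          type: remote_syslog       # assumed syslog output type
          host: splunk.example.com  # hypothetical Splunk syslog endpoint
          port: 514
          protocol: tcp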

Ceph monitoring improvements

Implemented the following monitoring improvements for Ceph:

  • Optimized the following Ceph dashboards in Grafana: Ceph Cluster, Ceph Pools, Ceph OSDs.

  • Removed the redundant Ceph Nodes Grafana dashboard. You can view its content using the following dashboards:

    • Ceph stats through the Ceph Cluster dashboard.

    • Resource utilization through the System dashboard, which now includes filtering by Ceph node labels, such as ceph_role_osd, ceph_role_mon, and ceph_role_mgr.

  • Removed the rook_cluster alert label.

  • Removed the redundant CephOSDDown alert.

  • Renamed the CephNodeDown alert to CephOSDNodeDown.

Optimization of StackLight ‘NodeDown’ alerts

Optimized StackLight NodeDown alerts for better notification handling after cluster recovery from an outage:

  • Reworked the NodeDown-related alert inhibition rules

  • Reworked the logic of all NodeDown-related alerts for all supported groups of nodes, which includes renaming of the <alertName>TargetsOutage alerts to <alertName>TargetDown

  • Added the TungstenFabricOperatorTargetDown alert for Tungsten Fabric deployments of MOSK clusters

  • Removed redundant KubeDNSTargetsOutage and KubePodsNotReady alerts

OpenSearch performance optimization

Optimized the OpenSearch configuration and the StackLight data model to provide better resource utilization and faster query responses. Added the following enhancements:

  • Limited the default namespaces for log collection with the ability to add custom namespaces to the monitoring list using the following parameters (see the configuration sketch after this list):

    • logging.namespaceFiltering.logs - limits the number of namespaces for Pod log collection. Enabled by default.

    • logging.namespaceFiltering.events - limits the number of namespaces for Kubernetes events collection. Disabled by default.

    • logging.namespaceFiltering.events/logs.extraNamespaces - adds extra namespaces that are not in the default list to collect specific Kubernetes Pod logs or Kubernetes events from them. Empty by default.

  • Added the logging.enforceOopsCompression parameter that enforces the 32 GB heap size limit, which keeps compressed ordinary object pointers (OOPs) enabled, unless the defined memory limit allows using 50 GB of heap. Enabled by default.

  • Added the NO_SEVERITY severity label that is automatically assigned to logs that have no severity label in the message. This provides more control over which logs are actually processed by Fluentd and which are skipped by mistake.

  • Added documentation on how to tune OpenSearch performance using hardware and software settings for baremetal-based Container Cloud clusters.
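The following sketch illustrates how the logging parameters described above could be combined in the StackLight configuration. The parameter names come from this section, while the enabled toggles and the exact nesting of extraNamespaces are assumptions made for illustration:

    logging:
      namespaceFiltering:
        logs:
          enabled: true              # assumed toggle; namespace filtering for Pod logs is enabled by default
          extraNamespaces:
          - my-app-namespace         # hypothetical namespace added to the default list
        events:
          enabled: false             # Kubernetes events filtering is disabled by default
          extraNamespaces: []
      enforceOopsCompression: true   # enabled by default; keeps the heap within 32 GB unless limits allow 50 GB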

Documentation enhancements

On top of continuous improvements delivered to the existing Container Cloud guides, added documentation on how to export data from the Table panels of Grafana dashboards to CSV.

Components versions

The following table lists the components versions of the Cluster release 17.0.0.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.7.1 0

Container runtime

Mirantis Container Runtime

23.0.7 1

Distributed storage

Ceph

17.2.6 (Quincy)

Rook

1.11.11-13

LCM

helm-controller

1.38.17

lcm-ansible

0.23.0-73-g01aa9b3

lcm-agent

1.38.17

StackLight

Alerta

9.0.0

Alertmanager

0.25.0

Alertmanager Webhook ServiceNow

0.1

Blackbox Exporter

0.24.0

cAdvisor

0.47.2

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.5.0

Fluentd

1.15.3

Grafana

9.5.7

Grafana Image Renderer

3.7.1

kube-state-metrics

2.8.2

Metric Collector

0.1

Metricbeat

7.12.1

Node Exporter

1.6.0

OAuth2 Proxy

7.1.3

OpenSearch

2.8.0

OpenSearch Dashboards

2.7.0

Prometheus

2.44.0

Prometheus ES Exporter

0.14.0

Prometheus MS Teams

1.5.2

Prometheus Patroni Exporter

0.0.1

Prometheus Postgres Exporter

0.12.0

Prometheus Relay

0.4

sf-notifier

0.4

sf-reporter

0.1

Spilo

13-2.1p9

Telegraf

1.9.1

1.27.3

Telemeter

4.4

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the artifacts of components included in the Cluster release 17.0.0.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.25.0-1.tgz

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.6-rel-5

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.25.0-0

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.8.1-rel-1

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-cve-1

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-cve-1

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-cve-1

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-cve-1

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-cve-1

rook

mirantis.azurecr.io/ceph/rook:v1.11.11-13

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.23.0-73-g01aa9b3/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.38.17

Helm charts

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.38.17.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.38.17.tgz

Docker images

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.38.17

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-3.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-12.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-7.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-193.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-250.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.17.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-60.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-47.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-54.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-245.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-15.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-7.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-7.tgz

stacklight

https://binary.mirantis.com/stacklight/helm/stacklight-0.13.3.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-37.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-37.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web

mirantis.azurecr.io/stacklight/alerta-web:9-20230929023008

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0-20230929023012

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230912073324

alpine-utils

mirantis.azurecr.io/stacklight/alpine-utils:1-20230929023018

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.24.0

cadvisor

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230929023009

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq

mirantis.azurecr.io/scale/curl-jq:alpine-20230925094109

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.15-20230929023011

grafana

mirantis.azurecr.io/stacklight/grafana:9.5.7

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230929023011

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230929023017

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.22-20230929023018

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230929023015

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230929023009

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.6.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-4

opensearch

mirantis.azurecr.io/stacklight/opensearch:2-20230929023012

opensearch-dashboards

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20230929023008

pgbouncer

mirantis.azurecr.io/stacklight/pgbouncer:1-20230929023018

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.44.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230929023017

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230929023018

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230929023016

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20230817113822

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20230929023013

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230911151029

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230929023012

stacklight-toolkit

mirantis.azurecr.io/stacklight/stacklight-toolkit:20231004090138

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230915023009

mirantis.azurecr.io/stacklight/telegraf:1.27-20230809094327

telemeter

mirantis.azurecr.io/stacklight/telemeter:4.4-20230929023011

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230929023012

System and MCR artifacts

16.3.3

This section includes release notes for the patch Cluster release 16.3.3 that is introduced in the Container Cloud patch release 2.28.3 and is based on the previous Cluster releases of the 16.3.x series.

This Cluster release supports Mirantis Kubernetes Engine 3.7.16 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.14.

  • For the list of CVE fixes delivered with this patch Cluster release, see 2.28.3

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.3.3.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.41.23.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.41.23.tgz

Docker images

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240716113922

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-a68c7101-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-a68c7101-amd64

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.28.0-23.tgz

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v18.2.4-10.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.28.0-18

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-24.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-6.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-6.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-6.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-6.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-6.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.13.5-26

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-6.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.41.23.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.41.23.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.41.23.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.41.23.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.41.23.tgz

Docker images

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-22

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.41.23

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-7

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-7

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-7

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-7

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-6

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-7

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-9

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-22

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.41.23

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-agent

mirantis.azurecr.io/core/bin/lcm-agent-1.41.23

lcm-ansible

mirantis.azurecr.io/lcm/bin/lcm-ansible/v0.26.0-109-gf037937/lcm-ansible.tar.gz

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.41.23.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.41.23.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/helm-controller:1.41.23

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.26.0-109-gf037937

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.26.0-109-gf037937

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-241.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-317.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards Updated

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-57.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-61.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp Removed

n/a

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.16.6.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20241118023015

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20241118023015

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20241022074315

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20241119091011

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20241118023015

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20241118023015

grafana

mirantis.azurecr.io/stacklight/grafana:10.4.3

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20241115071117

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.27-20241118023017

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20241118023015

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20241118023015

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.8.2

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-13

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2.17-20241118023016

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2.17-20241118023015

openstack-refapp Removed

n/a

pgbouncer

mirantis.azurecr.io/stacklight/pgbouncer:1-20240925023021

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20241118023016

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20241118023015

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20241118023015

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240925023019

psql-client Removed

n/a

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20241118023015

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20241021111607

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20241118023015

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20241118023015 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20241115074302 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20241118023015

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20241118023015

System and MCR artifacts

16.3.2

This section includes release notes for the patch Cluster release 16.3.2 that is introduced in the Container Cloud patch release 2.28.2 and is based on the previous Cluster releases of the 16.3.x series.

This Cluster release supports Mirantis Kubernetes Engine 3.7.16 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.14.

  • For the list of CVE fixes delivered with this patch Cluster release, see 2.28.2

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.3.2.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.41.22.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.41.22.tgz

Docker images

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240716113922

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-a68c7101-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-a68c7101-amd64

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.28.0-20.tgz

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v18.2.4-8.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.28.0-18

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-22.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-6.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-6.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-6.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-6.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-6.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.13.5-23

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-6.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.41.22.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.41.22.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.41.22.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.41.22.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.41.22.tgz

Docker images

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-22

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.41.22

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-7

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-7

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-7

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-7

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-6

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-7

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-9

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-22

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.41.22

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.26.0-105-gc0b52a3/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.41.22

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.41.22.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.41.22.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.41.22

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.26.0-105-gc0b52a3

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.26.0-105-gc0b52a3

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-241.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-309.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-56.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-61.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp Deprecated

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-16.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.16.5.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20241028023014

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20241028023014

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20241022074315

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20241028023016

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20241028023014

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20241028023014

grafana

mirantis.azurecr.io/stacklight/grafana:10.4.3

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20241021111512

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.27-20240925023021

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20241028023016

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20241028023014

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.8.2

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-13

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20241028023015

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20241028023015

openstack-refapp Deprecated

mirantis.azurecr.io/openstack/openstack-refapp:0.1.11

pgbouncer

mirantis.azurecr.io/stacklight/pgbouncer:1-20240925023021

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20241028023016

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20241028023015

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20241028023016

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240925023019

psql-client Deprecated

mirantis.azurecr.io/scale/psql-client:v13-20241029083652

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20241028023015

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20241021111607

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20241028023014

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20241028023014 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20241018175310

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20241028023014

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20241028023014

System and MCR artifacts
16.3.1

This section includes release notes for the patch Cluster release 16.3.1 that is introduced in the Container Cloud patch release 2.28.1 and is based on the Cluster release 16.3.0.

This Cluster release supports Mirantis Kubernetes Engine 3.7.15 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.14.

  • For the list of CVE fixes delivered with this patch Cluster release, see 2.28.1

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.3.1.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.41.18.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.41.18.tgz

Docker images

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240716113922

metallb-controller Updated

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-a68c7101-amd64

metallb-speaker Updated

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-a68c7101-amd64

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.28.0-19.tgz

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.4-6.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.28.0-18

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-20.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-6.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-6.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-6.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-6.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-6.cve

rook

mirantis.azurecr.io/ceph/rook:v1.13.5-21

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-6.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.41.18.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.41.18.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.41.18.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.41.18.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.41.18.tgz

Docker images

cinder-csi-plugin Updated

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-22

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.41.18

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-7

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-7

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-7

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-7

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-6

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-7

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-9

openstack-cloud-controller-manager Updated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-22

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.41.18

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.26.0-105-gc0b52a3/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.41.18

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.41.18.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.41.18.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.41.18

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.26.0-105-gc0b52a3

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.26.0-105-gc0b52a3

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-240.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-309.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-56.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-16.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.16.4.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20241021023014

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20241021023014

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20241022074315

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20241021023015

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20241021023014

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20241022103051

grafana

mirantis.azurecr.io/stacklight/grafana:10.4.3

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20241021111512

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.27-20240925023021

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20241021023015

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20241021023014

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.8.2

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-12

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20241021023014

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20241021023014

openstack-refapp Updated

mirantis.azurecr.io/openstack/openstack-refapp:0.1.10

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240925023021

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20241021023015

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20241021023014

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20241021023015

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240925023019

psql-client Updated

mirantis.azurecr.io/scale/psql-client:v13-20240924065857

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20241021023015

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20241021111607

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20241021023014

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20241021023014 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20241018175310 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20241021023014

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20241021023014

System and MCR artifacts
16.2.x series

Major and patch versions update path

The primary distinction between major and patch product versions is that major releases introduce new functionality, whereas patch releases deliver minor product enhancements, mostly CVE resolutions, for your clusters.

Depending on your deployment needs, you can either update only between major Cluster releases or apply patch updates between major releases. Choosing the latter ensures that you receive security fixes as soon as they become available, but be prepared to update your cluster frequently, approximately once every three weeks. Alternatively, you can update only between major Cluster releases, because each subsequent major Cluster release includes the patch Cluster release updates of the previous major Cluster release.
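
As a quick sanity check after applying either update path, you can verify that the nodes of the updated cluster report the Kubernetes version documented for the target Cluster release. The following is a minimal sketch that assumes the official kubernetes Python client and a kubeconfig file for the managed cluster; the expected version prefix and the kubeconfig path are illustrative assumptions, not fixed product values.

# Minimal sketch: verify that managed cluster nodes report the Kubernetes version
# expected for the target Cluster release (for example, 1.27 for the 16.2.x series).
# Assumes the official kubernetes Python client and a kubeconfig for the managed cluster.
from kubernetes import client, config

EXPECTED_PREFIX = "v1.27."  # taken from the release notes of the target Cluster release

def check_node_versions(kubeconfig_path: str) -> None:
    config.load_kube_config(config_file=kubeconfig_path)
    for node in client.CoreV1Api().list_node().items:
        version = node.status.node_info.kubelet_version
        status = "OK" if version.startswith(EXPECTED_PREFIX) else "MISMATCH"
        print(f"{node.metadata.name}: {version} [{status}]")

if __name__ == "__main__":
    check_node_versions("managed-cluster.kubeconfig")  # kubeconfig path is an assumption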

This section outlines release notes for major and patch Cluster releases of the 16.2.x series.

16.2.7

This section includes release notes for the patch Cluster release 16.2.7 that is introduced in the Container Cloud patch release 2.28.3 and is based on the previous Cluster releases of the 16.2.x series.

This Cluster release supports Mirantis Kubernetes Engine 3.7.16 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.11.

  • For the list of CVE fixes delivered with this patch Cluster release, see 2.28.3

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.2.7.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.40.29.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.40.29.tgz

Docker images

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240716113922

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-a68c7101-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-a68c7101-amd64

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.28.3-2.tgz

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v18.2.4-10.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.28.3-0

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-24.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-6.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-6.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-6.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-6.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-6.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.13.5-26

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-6.cve

Core artifacts

Artifact

Component

Path

Helm charts

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.40.29.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.40.29.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.40.29.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.40.29.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.40.29.tgz

Docker images

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-22

client-certificate-controller

mirantis.azurecr.io/core/client-certificate-controller:1.40.29

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-7

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-7

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-7

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-7

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-6

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-7

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-9

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-22

policy-controller

mirantis.azurecr.io/core/policy-controller:1.40.29

LCM artifacts

Artifact

Component

Path

Binaries

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.40.29

lcm-ansible Updated

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.25.0-45-g957de77/lcm-ansible.tar.gz

Helm charts

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.40.29.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.40.29.tgz

Docker images

helm-controller

mirantis.azurecr.io/core/helm-controller:1.40.29

mcc-haproxy Updated

mirantis.azurecr.io/lcm/mcc-haproxy:v0.25.0-45-g957de77

mcc-keepalived Updated

mirantis.azurecr.io/lcm/mcc-keepalived:v0.25.0-45-g957de77

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-238.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-317.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards Updated

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-57.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-61.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp Removed

n/a

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.15.14.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20241118023015

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20241118023015

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20241022074315

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20241119091011

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20241118023015

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20241118023015

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20241115071117

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.27-20241118023017

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20241118023015

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20241118023015

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-13

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2.17-20241118023016

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2.17-20241118023015

openstack-refapp Removed

n/a

pgbouncer

mirantis.azurecr.io/stacklight/pgbouncer:1-20240925023021

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20241118023016

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20241118023015

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20241118023015

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240925023019

psql-client Removed

n/a

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20241118023015

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20241021111607

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20241118023015

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20241118023015 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20241115074302 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20241118023015

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20241118023015

System and MCR artifacts
16.2.6

This section includes release notes for the patch Cluster release 16.2.6 that is introduced in the Container Cloud patch release 2.28.2 and is based on the previous Cluster releases of the 16.2.x series.

This Cluster release supports Mirantis Kubernetes Engine 3.7.16 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.11.

  • For the list of CVE fixes delivered with this patch Cluster release, see 2.28.2

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.2.6.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.40.29.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.40.29.tgz

Docker images

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240716113922

metallb-controller

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-a68c7101-amd64

metallb-speaker

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-a68c7101-amd64

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.28.2-1.tgz

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v18.2.4-8.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.28.2-0

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-22.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-6.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-6.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-6.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-6.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-6.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.13.5-23

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-6.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.40.29.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.40.29.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.40.29.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.40.29.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.40.29.tgz

Docker images

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-22

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.40.29

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-7

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-7

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-7

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-7

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-6

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-7

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-9

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-22

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.40.29

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.25.0-44-g561405b/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.40.29

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.40.29.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.40.29.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.40.29

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.25.0-44-g561405b

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.25.0-44-g561405b

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-238.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-309.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-61.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp Deprecated

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-16.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.15.13.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20241028023014

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20241028023014

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20241022074315

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20241028023016

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20241028023014

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20241028023014

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20241021111512

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.27-20240925023021

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20241028023016

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20241028023014

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-13

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20241028023015

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20241028023015

openstack-refapp Deprecated

mirantis.azurecr.io/openstack/openstack-refapp:0.1.11

pgbouncer

mirantis.azurecr.io/stacklight/pgbouncer:1-20240925023021

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20241028023016

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20241028023015

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20241028023016

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240925023019

psql-client Deprecated

mirantis.azurecr.io/scale/psql-client:v13-20241029083652

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20241028023015

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20241021111607

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20241028023014

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20241028023014 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20241018175310

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20241028023014

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20241028023014

System and MCR artifacts
16.2.5

This section includes release notes for the patch Cluster release 16.2.5 that is introduced in the Container Cloud patch release 2.28.1 and is based on the previous Cluster releases of the 16.2.x series.

This Cluster release supports Mirantis Kubernetes Engine 3.7.15 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.11.

  • For the list of CVE fixes delivered with this patch Cluster release, see 2.28.1

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.2.5.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Bare metal artifacts

Artifact

Component

Path

Helm charts Updated

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.40.28.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.40.28.tgz

Docker images

ironic

mirantis.azurecr.io/openstack/ironic:antelope-jammy-20240716113922

metallb-controller Updated

mirantis.azurecr.io/bm/metallb/controller:v0.14.5-a68c7101-amd64

metallb-speaker Updated

mirantis.azurecr.io/bm/metallb/speaker:v0.14.5-a68c7101-amd64

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.28.1-5.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.4-7.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.28.1-4

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-21.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-6.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-6.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-6.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-6.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-6.cve

rook

mirantis.azurecr.io/ceph/rook:v1.13.5-22

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-6.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.40.28.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.40.28.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.40.28.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.40.28.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.40.28.tgz

Docker images Updated

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-22

client-certificate-controller

mirantis.azurecr.io/core/client-certificate-controller:1.40.28

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-7

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-7

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-7

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-7

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-6

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-7

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-9

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-22

policy-controller

mirantis.azurecr.io/core/policy-controller:1.40.28

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.25.0-44-g561405b/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.40.28

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.40.28.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.40.28.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.40.28

mcc-haproxy

mirantis.azurecr.io/lcm/mcc-haproxy:v0.25.0-44-g561405b

mcc-keepalived

mirantis.azurecr.io/lcm/mcc-keepalived:v0.25.0-44-g561405b

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-238.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-309.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-16.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.15.12.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20241021023014

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20241021023014

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20241022074315

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20241021023015

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20241021023014

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20241022103051

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20241021111512

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.27-20240925023021

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20241021023015

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20241021023014

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-12

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20241021023014

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20241021023014

openstack-refapp Updated

mirantis.azurecr.io/openstack/openstack-refapp:0.1.10

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240925023021

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14-20241021023015

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20241021023014

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20241021023015

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240925023019

psql-client Updated

mirantis.azurecr.io/scale/psql-client:v13-20240924065857

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20241021023015

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20241021111607

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20241021023014

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20241021023014 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20241018175310 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20241021023014

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20241021023014

System and MCR artifacts
16.2.4

This section includes release notes for the patch Cluster release 16.2.4 that is introduced in the Container Cloud patch release 2.27.4 and is based on the previous Cluster releases of the 16.2.x series.

This Cluster release supports Mirantis Kubernetes Engine 3.7.12 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.11.

  • For the list of CVE fixes delivered with this patch Cluster release, see 2.27.4

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.2.4.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.27.3-10.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.4-4.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.27.3-9

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-18.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-5.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-5.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-5.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-5.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-5.cve

rook

mirantis.azurecr.io/ceph/rook:v1.13.5-19

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-5.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.40.23.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.40.23.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.40.23.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.40.23.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.40.23.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.40.23.tgz

Docker images

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-18

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.40.23

csi-attacher Updated

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-6

csi-node-driver-registrar Updated

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-6

csi-provisioner Updated

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-6

csi-resizer Updated

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-6

csi-snapshotter Updated

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-5

livenessprobe Updated

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-6

metrics-server Updated

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-8

openstack-cloud-controller-manager Updated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-18

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.40.23

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.25.0-42-g8710cbe/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.40.23

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.40.23.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.40.23.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.40.23

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-238.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-300.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-16.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.15.9.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240821023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240821023016

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240821023017

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240821023015

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240821023009

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240821023018

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.27-20240821023018

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240821023015

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240626023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-10

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240821023015

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240821023010

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.1.8

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240821023018

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240821023016

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240821023016

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240821023015

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20240701095027

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240821023015

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240822083023

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240821023009 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240426131156

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240821023015

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240821023021

System and MCR artifacts
16.2.3

This section includes release notes for the patch Cluster release 16.2.3 that is introduced in the Container Cloud patch release 2.27.3 and is based on the previous Cluster releases of the 16.2.x series.

This Cluster release supports Mirantis Kubernetes Engine 3.7.12 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.11.

  • For the list of CVE fixes delivered with this patch Cluster release, see 2.27.3

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.2.3.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.27.3-8.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.4-3.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.27.3-7

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-17.release

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-5.release

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-5.release

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-5.release

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-5.release

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-5.release

rook

mirantis.azurecr.io/ceph/rook:v1.13.5-18

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-5.cve

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.40.21.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.40.21.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.40.21.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.40.21.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.40.21.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.40.21.tgz

vsphere-cloud-controller-manager Removed

n/a

vsphere-csi-plugin Removed

n/a

Docker images

cinder-csi-plugin Updated

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-18

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.40.21

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-5

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-5

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-5

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-5

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-4

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-5

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-7

openstack-cloud-controller-manager Updated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-18

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.40.21

vsphere-cloud-controller-manager Removed

n/a

vsphere-csi-driver Removed

n/a

vsphere-csi-syncer Removed

n/a

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.25.0-42-g8710cbe/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.40.21

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.40.21.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.40.21.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.40.21

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-238.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-300.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-16.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.15.8.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240807023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240807023013

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240807023018

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240807023010

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240807023010

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240807023017

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.27-20240812134116

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240807023014

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240626023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-9

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240807023012

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240807023009

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.1.8

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240807023019

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240807023016

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240807023017

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240807023018

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20240701095027

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240807023014

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240807023011

stacklight-toolkit Removed

n/a

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240605023010

mirantis.azurecr.io/stacklight/telegraf:1-20240426131156

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240812121935

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240807023014

System and MCR artifacts
16.2.2

This section includes release notes for the patch Cluster release 16.2.2 that is introduced in the Container Cloud patch release 2.27.2 and is based on the previous Cluster releases of the 16.2.x series.

This Cluster release supports Mirantis Kubernetes Engine 3.7.11 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.11.

  • For the list of CVE fixes delivered with this patch Cluster release, see 2.27.2

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.2.2.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.27.0-13.tgz

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.3-2.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.27.0-12

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-14.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-4.release

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-4.release

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-4.release

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-4.release

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-4.release

rook

mirantis.azurecr.io/ceph/rook:v1.13.5-16

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-4.release

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.40.18.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.40.18.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.40.18.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.40.18.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.40.18.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.40.18.tgz

vsphere-cloud-controller-manager

https://binary.mirantis.com/core/helm/vsphere-cloud-controller-manager-1.40.18.tgz

vsphere-csi-plugin

https://binary.mirantis.com/core/helm/vsphere-csi-plugin-1.40.18.tgz

Docker images

cinder-csi-plugin Updated

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-17

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.40.18

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-5

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-5

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-5

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-5

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-4

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-5

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-7

openstack-cloud-controller-manager Updated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-17

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.40.18

vsphere-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/vsphere-cloud-controller-manager:v1.27.0-6

vsphere-csi-driver

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-driver:v3.0.2-1

vsphere-csi-syncer

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-syncer:v3.0.2-1

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.25.0-40-g890ffca/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.40.18

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.40.18.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.40.18.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.40.18

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-238.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-300.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-16.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.15.6.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240710023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240710023018

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240710023020

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240710023011

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240710023010

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240710023020

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240710023014

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240710023014

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240626023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-9

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240710023013

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240710023010

openstack-refapp Updated

mirantis.azurecr.io/openstack/openstack-refapp:0.1.8

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240710023020

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240710023017

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240710023019

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240710023019

psql-client Updated

mirantis.azurecr.io/scale/psql-client:v13-20240701095027

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240710023015

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240710023011

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240710023015

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240605023010

mirantis.azurecr.io/stacklight/telegraf:1-20240426131156

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240710023015

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240710023019

System and MCR artifacts
16.2.1

This section includes release notes for the patch Cluster release 16.2.1 that is introduced in the Container Cloud patch release 2.27.1 and is based on the Cluster release 16.2.0.

This Cluster release supports Mirantis Kubernetes Engine 3.7.10 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.11, in which docker-ee-cli was updated to version 23.0.13 to fix several CVEs.

  • For the list of CVE fixes delivered with this patch Cluster release, see 2.27.1

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.2.1.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.27.0-13.tgz

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v18.2.3-2.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.27.0-12

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-14.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-4.release

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-4.release

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-4.release

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-4.release

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-4.release

rook Updated

mirantis.azurecr.io/ceph/rook:v1.13.5-16

snapshot-controller

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-4.release

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.40.15.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.40.15.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.40.15.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.40.15.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.40.15.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.40.15.tgz

vsphere-cloud-controller-manager

https://binary.mirantis.com/core/helm/vsphere-cloud-controller-manager-1.40.15.tgz

vsphere-csi-plugin

https://binary.mirantis.com/core/helm/vsphere-csi-plugin-1.40.15.tgz

Docker images

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-16

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.40.15

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-5

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-5

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-5

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-5

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-4

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-5

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-7

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-16

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.40.15

vsphere-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/vsphere-cloud-controller-manager:v1.27.0-6

vsphere-csi-driver

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-driver:v3.0.2-1

vsphere-csi-syncer

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-syncer:v3.0.2-1

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.25.0-40-g890ffca/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.40.15

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.40.15.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.40.15.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.40.15

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-238.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-300.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch Updated

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-16.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.15.5.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240701140358

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240701140403

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240701140404

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240701140359

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240701140357

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240701140403

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240701140401

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240701140400

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240626023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-8

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240701140359

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240701140352

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.1.7

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240701140404

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240701140403

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240701140402

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240701140404

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20240222083402

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240701140403

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240701140359

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240701140402

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240605023010 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240426131156

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240701140401

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240701140402

System and MCR artifacts
16.2.0

This section outlines release notes for the major Cluster release 16.2.0 that is introduced in the Container Cloud release 2.27.0. The Cluster release 16.2.0 supports:

  • Mirantis Kubernetes Engine (MKE) 3.7.8. For details, see MKE Release Notes.

  • Mirantis Container Runtime (MCR) 23.0.11. For details, see MCR Release Notes.

  • Kubernetes 1.27.

For the list of known and addressed issues, refer to the Container Cloud release 2.27.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 16.2.0 that is introduced in the Container Cloud release 2.27.0.

Support for MKE 3.7.8

Introduced support for Mirantis Kubernetes Engine (MKE) 3.7.8, which supports Kubernetes 1.27, for the Container Cloud management and managed clusters.

On existing managed clusters, MKE is updated to the latest supported version when you update your managed cluster to the Cluster release 16.2.0.

Note

This enhancement applies to users who follow the update train using major releases. Users who install patch releases have already obtained MKE 3.7.8 in Container Cloud 2.26.4 (Cluster release 16.1.4).

Improvements in the MKE benchmark compliance

Analyzed and fixed the majority of failed MKE benchmark compliance checks for Container Cloud core components and StackLight. The following controls were analyzed:

Control ID

Component

Control description

Analyzed item

5.1.2

client-certificate-controller
helm-controller
local-volume-provisioner

Minimize access to secrets

ClusterRoles with get, list, and watch access to Secret objects in a cluster

5.1.4

local-volume-provisioner

Minimize access to create pods

ClusterRoles with create access to Pod objects in a cluster

5.2.5

client-certificate-controller
helm-controller
policy-controller
stacklight

Minimize the admission of containers with allowPrivilegeEscalation

Containers with allowPrivilegeEscalation capability enabled

Automatic upgrade of Ceph from Quincy to Reef

Upgraded Ceph major version from Quincy 17.2.7 (17.2.7-12.cve in the patch release train) to Reef 18.2.3 with an automatic upgrade of Ceph components on existing managed clusters during the Cluster version update.

Ceph Reef delivers a new version of RocksDB that provides better I/O performance. This version also supports RGW multisite resharding and contains overall security improvements.

Support for Rook v1.13 in Ceph

Added support for Rook v1.13, which contains Ceph CSI plugin 3.10.x as the default supported version. For a complete list of features and breaking changes, refer to the official Rook documentation.

Setting a configuration section for Rook parameters

Implemented the section option for the rookConfig parameter, which enables you to specify the section where a Rook parameter must be placed. Using this option restarts only the specific daemons related to the corresponding section instead of restarting all Ceph daemons except Ceph OSD.
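
The following sketch illustrates how a section-scoped parameter might look in the rookConfig map of the KaaSCephCluster specification. The section|parameter key format and the parameter names below are assumptions made for this example only; refer to the Operations Guide for the exact syntax supported by your release.

    spec:
      cephClusterSpec:
        rookConfig:
          # Assumption for illustration: the section is specified as a prefix
          # of the parameter key. Placing the parameter under the "mon" section
          # restarts only the Ceph Monitor daemons.
          "mon|mon_max_pg_per_osd": "300"
          # A parameter without a section keeps the previous behavior.
          osd_pool_default_size: "3"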

Monitoring of I/O errors in kernel logs

Implemented monitoring of disk and I/O errors in kernel logs to detect hardware and software issues. The implementation includes the dedicated KernelIOErrorsDetected alert, the kernel_io_errors_total metric collected on the Fluentd side using I/O error patterns, and a general refactoring of the metrics created in Fluentd.
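
As an illustration, a custom alert that builds on the new kernel_io_errors_total metric could be added through the StackLight Helm chart values. The customAlerts structure, the threshold, and the label names below are assumptions for this sketch and do not reproduce the built-in KernelIOErrorsDetected alert definition.

    prometheusServer:
      customAlerts:
        # Assumption: raise a warning when more than 10 kernel I/O errors
        # are logged on a node within 15 minutes.
        - alert: KernelIOErrorsHighRate
          expr: increase(kernel_io_errors_total[15m]) > 10
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "Kernel I/O errors are being logged on {{ $labels.node }}"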

S.M.A.R.T. metrics for creating alert rules on bare metal clusters

Added documentation describing usage examples of alert rules based on S.M.A.R.T. metrics to monitor disk information on bare metal clusters.

The StackLight telegraf-ds-smart exporter uses the S.M.A.R.T. plugin to obtain detailed disk information and export it as metrics. S.M.A.R.T. is a monitoring system commonly supported across vendors, with performance data provided as attributes.
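
For example, an alert rule on a S.M.A.R.T. health metric exported through telegraf-ds-smart might look as follows. The metric name smart_device_health_ok follows the Telegraf S.M.A.R.T. plugin naming and, like the label names and the customAlerts structure, is an assumption for this sketch; check the metrics actually available in your deployment before creating such a rule.

    prometheusServer:
      customAlerts:
        # Assumption: the health flag equals 1 while the disk reports an
        # overall-health PASSED status and drops to 0 otherwise.
        - alert: DiskSmartHealthFailed
          expr: smart_device_health_ok == 0
          for: 10m
          labels:
            severity: critical
          annotations:
            summary: "S.M.A.R.T. health check failed for {{ $labels.device }} on {{ $labels.host }}"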

Improvements for OpenSearch and OpenSearch Indices Grafana dashboards

Improved the performance and usability of the OpenSearch and OpenSearch Indices Grafana dashboards and added the capability to minimize the number of indices displayed on the dashboards.

Removal of grafana-image-renderer from StackLight

As part of StackLight refactoring, removed grafana-image-renderer from the Grafana installation in Container Cloud. StackLight used this component only for image generation in the Grafana web UI, which can be easily replaced with standard screenshots.

The improvement optimizes resource usage and prevents potential CVEs that frequently affect this component.

Components versions

The following table lists the components versions of the Cluster release 16.2.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.7.8 0

Container runtime Updated

Mirantis Container Runtime

23.0.11 1

Distributed storage

Ceph

18.2.3-1.release (Reef)

Rook

1.13.5-15

LCM Updated

helm-controller

1.40.11

lcm-ansible

0.25.0-37-gc15c97d

lcm-agent

1.40.11

StackLight

Alerta

9.0.1

Alertmanager

0.25.0

Alertmanager Webhook ServiceNow

0.1

Blackbox Exporter

0.24.0

cAdvisor

0.47.2

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.5.0

Fluentd

1.15.3

Grafana

10.3.1

Grafana Image Renderer Removed

n/a

kube-state-metrics

2.10.1

Metric Collector

0.1

Metricbeat

7.12.1

Node Exporter

1.7.0

OAuth2 Proxy

7.1.3

OpenSearch

2.12.0

OpenSearch Dashboards

2.12.0

Prometheus

2.48.0

Prometheus ES Exporter

0.14.0

Prometheus MS Teams

1.5.2

Prometheus Patroni Exporter

0.0.1

Prometheus Postgres Exporter

0.15.0

Prometheus Relay

0.4

sf-notifier

0.4

sf-reporter

0.1

Spilo

13-2.1p9

Telegraf

1.9.1

1.30.2

Telemeter

4.4

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the artifacts of components included in the Cluster release 16.2.0.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.27.0-7.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v18.2.3-1.release

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.27.0-6

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-12.release

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-4.release

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-4.release

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-4.release

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-4.release

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-4.release

rook

mirantis.azurecr.io/ceph/rook:v1.13.5-15

snapshot-controller New

mirantis.azurecr.io/mirantis/snapshot-controller:v6.3.2-4.release

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.25.0-37-gc15c97d/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.40.11

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.40.11.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.40.11.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.40.11

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-238.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-300.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch Updated

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-87.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-16.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.15.3.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web

mirantis.azurecr.io/stacklight/alerta-web:9-20240515023009

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0-20240515023012

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils

mirantis.azurecr.io/stacklight/alpine-utils:1-20240515023017

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240515023012

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240611084259

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

grafana-image-renderer Removed

n/a

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240515023018

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.22-20240515023015

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240515023016

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240515023009

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-8

opensearch

mirantis.azurecr.io/stacklight/opensearch:2-20240515023012

opensearch-dashboards

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240515023010

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.1.7

pgbouncer

mirantis.azurecr.io/stacklight/pgbouncer:1-20240515023018

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240515023016

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240515023017

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240515023017

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20240222083402

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240515023012

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240515023010

stacklight-toolkit

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240515023016

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240515023008

mirantis.azurecr.io/stacklight/telegraf:1-20240426131156

telemeter

mirantis.azurecr.io/stacklight/telemeter:4.4-20240515023015

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240515023012

System and MCR artifacts
16.1.x series

Major and patch versions update path

The primary distinction between major and patch product versions is that major release versions introduce new functionality, whereas patch release versions predominantly deliver minor product enhancements, mostly CVE fixes, for your clusters.

Depending on your deployment needs, you can either update only between major Cluster releases or apply patch updates between major releases. The latter option ensures that you receive security fixes as soon as they become available, but be prepared to update your cluster frequently, approximately once every three weeks. Alternatively, you can update only between major Cluster releases, because each subsequent major Cluster release includes the patch Cluster release updates of the previous major Cluster release.

This section outlines release notes for major and patch Cluster releases of the 16.1.x series.

16.1.7

This section includes release notes for the patch Cluster release 16.1.7 that is introduced in the Container Cloud patch release 2.27.2 and is based on the previous Cluster releases of the 16.1.x series.

This Cluster release supports Mirantis Kubernetes Engine 3.7.11 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.9.

  • For the list of CVE fixes delivered with this patch Cluster release, see 2.27.2

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.1.7.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.27.1-6.tgz

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.7-15.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.27.1-5

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-15.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-3.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-3.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-3.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-3.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-3.cve

rook

mirantis.azurecr.io/ceph/rook:v1.12.10-21

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.39.31.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.39.31.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.39.31.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.39.31.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.39.31.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.39.31.tgz

vsphere-cloud-controller-manager

https://binary.mirantis.com/core/helm/vsphere-cloud-controller-manager-1.39.31.tgz

vsphere-csi-plugin

https://binary.mirantis.com/core/helm/vsphere-csi-plugin-1.39.31.tgz

Docker images

cinder-csi-plugin Updated

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-17

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.39.31

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-5

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-5

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-5

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-5

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-4

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-5

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-7

openstack-cloud-controller-manager Updated

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-17

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.39.31

vsphere-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/vsphere-cloud-controller-manager:v1.27.0-6

vsphere-csi-driver

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-driver:v3.0.2-1

vsphere-csi-syncer

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-syncer:v3.0.2-1

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.24.0-52-gd8adaba/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.39.31

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.39.31.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.39.31.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.39.31

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-223.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-290.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-16.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.14.15.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240710023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240710023018

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240710023020

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240710023011

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240710023010

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20240318142141

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240710023020

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240710023014

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240710023014

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240626023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-9

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240710023013

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240710023010

openstack-refapp Updated

mirantis.azurecr.io/openstack/openstack-refapp:0.1.8

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240710023020

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240710023017

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240710023019

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240710023019

psql-client Updated

mirantis.azurecr.io/scale/psql-client:v13-20240701095027

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240710023015

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240710023011

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240710023015

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240605023010

mirantis.azurecr.io/stacklight/telegraf:1-20240426131156

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240710023015

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240710023019

System and MCR artifacts
1

Only for bare metal clusters

16.1.6

This section includes release notes for the patch Cluster release 16.1.6 that is introduced in the Container Cloud patch release 2.27.1 and is based on the previous Cluster releases of the 16.1.x series.

This Cluster release supports Mirantis Kubernetes Engine 3.7.10 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.9, in which docker-ee-cli was updated to version 23.0.13 to fix several CVEs.

  • For the list of CVE fixes delivered with this patch Cluster release, see 2.27.1

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.1.6.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.27.1-6.tgz

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v17.2.7-15.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.27.1-5

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-15.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-3.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-3.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-3.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-3.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-3.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.12.10-21

Core artifacts

Artifact

Component

Path

Helm charts Updated

cinder-csi-plugin

https://binary.mirantis.com/core/helm/cinder-csi-plugin-1.39.29.tgz

client-certificate-controller

https://binary.mirantis.com/core/helm/client-certificate-controller-1.39.29.tgz

local-volume-provisioner

https://binary.mirantis.com/core/helm/local-volume-provisioner-1.39.29.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.39.29.tgz

openstack-cloud-controller-manager

https://binary.mirantis.com/core/helm/openstack-cloud-controller-manager-1.39.29.tgz

policy-controller

https://binary.mirantis.com/core/helm/policy-controller-1.39.29.tgz

vsphere-cloud-controller-manager

https://binary.mirantis.com/core/helm/vsphere-cloud-controller-manager-1.39.29.tgz

vsphere-csi-plugin

https://binary.mirantis.com/core/helm/vsphere-csi-plugin-1.39.29.tgz

Docker images

cinder-csi-plugin

mirantis.azurecr.io/lcm/kubernetes/cinder-csi-plugin:v1.27.2-16

client-certificate-controller Updated

mirantis.azurecr.io/core/client-certificate-controller:1.39.29

csi-attacher

mirantis.azurecr.io/lcm/k8scsi/csi-attacher:v4.2.0-5

csi-node-driver-registrar

mirantis.azurecr.io/lcm/k8scsi/csi-node-driver-registrar:v2.7.0-5

csi-provisioner

mirantis.azurecr.io/lcm/k8scsi/csi-provisioner:v3.4.1-5

csi-resizer

mirantis.azurecr.io/lcm/k8scsi/csi-resizer:v1.7.0-5

csi-snapshotter

mirantis.azurecr.io/lcm/k8scsi/csi-snapshotter:v6.2.1-mcc-4

livenessprobe

mirantis.azurecr.io/lcm/k8scsi/livenessprobe:v2.9.0-5

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.6.3-7

openstack-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/openstack-cloud-controller-manager:v1.27.2-16

policy-controller Updated

mirantis.azurecr.io/core/policy-controller:1.39.29

vsphere-cloud-controller-manager

mirantis.azurecr.io/lcm/kubernetes/vsphere-cloud-controller-manager:v1.27.0-6

vsphere-csi-driver

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-driver:v3.0.2-1

vsphere-csi-syncer

mirantis.azurecr.io/lcm/kubernetes/vsphere-csi-syncer:v3.0.2-1

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.24.0-52-gd8adaba/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.39.29

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.39.29.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.39.29.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.39.29

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-223.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-290.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch Updated

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-88.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-16.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.14.14.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240701140358

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240701140403

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240701140404

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240701140359

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240701140357

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20240318142141

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240701140403

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240701140401

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240701140400

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240626023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-8

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240701140359

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240701140352

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.1.7

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240701140404

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240701140403

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240701140402

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240701140404

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20240222083402

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240701140403

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240701140359

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240701140402

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240605023010 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240426131156

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240701140401

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240701140402

System and MCR artifacts

1 Only for bare metal clusters

16.1.5

This section includes release notes for the patch Cluster release 16.1.5 that is introduced in the Container Cloud patch release 2.26.5 and is based on the previous Cluster releases of the 16.1.x series.

This Cluster release supports Mirantis Kubernetes Engine 3.7.8 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.9.

  • For the list of CVE fixes delivered with this patch Cluster release, see 2.26.5

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.1.5.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.26.5-1.tgz

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v17.2.7-13.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.26.5-0

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-10.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-3.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-3.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-3.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-3.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-3.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.12.10-19

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.24.0-47-gf77368e/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.39.28

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.39.28.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.39.28.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.39.28

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-223.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-290.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-86.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-16.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.14.11.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240515023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240515023012

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240515023017

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240515023012

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240515023009

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20240318142141

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240515023018

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240515023015

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240515023016

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240515023009

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-8

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240515023012

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240515023010

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.1.7

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240515023018

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240515023016

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240515023017

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240515023017

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20240222083402

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240515023012

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240515023010

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240515023016

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240515023008 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240426131156

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240515023015

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240515023012

System and MCR artifacts

1 Only for bare metal clusters

16.1.4

This section includes release notes for the patch Cluster release 16.1.4 that is introduced in the Container Cloud patch release 2.26.4 and is based on the previous Cluster releases of the 16.1.x series.

This Cluster release supports Mirantis Kubernetes Engine 3.7.8 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.9.

  • For the list of enhancements and CVE fixes delivered with this patch Cluster release, see 2.26.4

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.1.4.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.26.4-1.tgz

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v17.2.7-12.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.26.4-0

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-9.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-3.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-3.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-3.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-3.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-3.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.12.10-18

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.24.0-47-gf77368e/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.39.26

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.39.26.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.39.26.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.39.26

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-223.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-290.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-86.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-16.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.14.10.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240424023010

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240424023016

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240424023018

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240424023015

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240424023010

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20240318142141

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240424023020

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240424023017

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240424023015

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240424023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-8

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240424023015

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240424023010

openstack-refapp Updated

mirantis.azurecr.io/openstack/openstack-refapp:0.1.7

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240424023020

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240424023018

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240424023018

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240424023017

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20240222083402

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240424023015

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240424023015

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240424023017

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240424023009 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240426131156 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240424023014

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240424023015

System and MCR artifacts

1 Only for bare metal clusters

16.1.3

This section includes release notes for the patch Cluster release 16.1.3 that is introduced in the Container Cloud patch release 2.26.3 and is based on the previous Cluster releases of the 16.1.x series.

This Cluster release supports Mirantis Kubernetes Engine 3.7.7 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.9.

  • For the list of enhancements and CVE fixes delivered with this patch Cluster release, see 2.26.3

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.1.3.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.26.3-1.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.7-11.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.26.3-0

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-8.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-3.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-3.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-3.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-3.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-3.cve

rook

mirantis.azurecr.io/ceph/rook:v1.12.10-17

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.24.0-47-gf77368e/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.39.23

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.39.23.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.39.23.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.39.23

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-223.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-290.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-86.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-16.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.14.9.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240403023008

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240408080051

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240403023017

blackbox-exporter Updated

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20240408080237

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240408140050

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240403023009

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20240318142141

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240403023017

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240403023014

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240408155718

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240408135717

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-8

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240403023014

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240403023009

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.1.6

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240403023017

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240403023016

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20240408080322

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240403023017

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240408135804

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20240222083402

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240403023015

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240403023013

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240403023016

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240403023008 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240306130859

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240408155750

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240408155738

System and MCR artifacts

1 Only for bare metal clusters

16.1.2

This section includes release notes for the patch Cluster release 16.1.2 that is introduced in the Container Cloud patch release 2.26.2 and is based on the Cluster releases 16.1.1 and 16.1.0.

This Cluster release supports Mirantis Kubernetes Engine 3.7.6 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.9, in which docker-ee-cli was updated to version 23.0.10 to fix several CVEs.

  • For the list of enhancements and CVE fixes delivered with this patch Cluster release, see 2.26.2

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.1.2.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.26.2-4.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.7-10.release

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.26.2-3

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-7.release

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-2.release

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-2.release

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-2.release

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-2.release

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-2.release

rook

mirantis.azurecr.io/ceph/rook:v1.12.10-16

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.24.0-47-gf77368e/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.39.19

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.39.19.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.39.19.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.39.19

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-223.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-290.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-86.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-16.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.14.8.tgz

telegraf-ds Updated

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-42.tgz

telegraf-s Updated

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-42.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240318062240

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240318062244

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240318145925

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240318062249

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20231204053401

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240318062245

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240318062244

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20240318142141

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240318062249

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240318062246

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240318062249

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240318062240

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-7

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240318062244

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240318062241

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.1.6

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240318062240

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240318062248

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20231204064415

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240318062250

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240318062249

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20240222083402

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240318062246

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240318145903

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240318062245

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240318062247

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240318062240 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240306130859 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240318062245

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240318062247

System and MCR artifacts

1 Only for bare metal clusters

16.1.1

This section includes release notes for the patch Cluster release 16.1.1 that is introduced in the Container Cloud patch release 2.26.1 and is based on the Cluster release 16.1.0.

This Cluster release supports Mirantis Kubernetes Engine 3.7.5 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.9.

  • For the list of enhancements and CVE fixes delivered with this patch Cluster release, see 2.26.1

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.1.1.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.26.1-1.tgz

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v17.2.7-9.release

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.26.1-0

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-5.release

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-1.release

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-1.release

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-1.release

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-1.release

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-1.release

rook Updated

mirantis.azurecr.io/ceph/rook:v1.12.10-14

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.24.0-47-gf77368e/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.39.15

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.39.15.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.39.15.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.39.15

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-223.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-285.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch Updated

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-86.tgz

opensearch-dashboards Updated

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-54.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-16.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.14.7.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-40.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-41.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240228023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240228023011

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240226135626

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240228023020

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20231204053401

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240228023015

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20240228023011

grafana

mirantis.azurecr.io/stacklight/grafana:10.3.1

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20240228060359

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240228023018

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240228023017

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240228023015

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240228023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-7

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240228023015

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240228023009

openstack-refapp Updated

mirantis.azurecr.io/openstack/openstack-refapp:0.1.6

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240228023020

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240228023015

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20231204064415

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240228023020

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240228023015

psql-client Updated

mirantis.azurecr.io/scale/psql-client:v13-20240222083402

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240228023016

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240226135743

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240228023016

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240228023017

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240228023008 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240219105842 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240228023013

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240228023014

System and MCR artifacts

1 Only for bare metal clusters

16.1.0

This section outlines release notes for the major Cluster release 16.1.0 that is introduced in the Container Cloud release 2.26.0. The Cluster release 16.1.0 supports:

  • Mirantis Kubernetes Engine (MKE) 3.7.5. For details, see MKE Release Notes.

  • Mirantis Container Runtime (MCR) 23.0.9. For details, see MCR Release Notes.

  • Kubernetes 1.27.

For the list of known and addressed issues, refer to the Container Cloud release 2.26.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 16.1.0 that is introduced in the Container Cloud release 2.26.0.

Support for MKE 3.7.5 and MCR 23.0.9

Introduced support for Mirantis Container Runtime (MCR) 23.0.9 and Mirantis Kubernetes Engine (MKE) 3.7.5, which supports Kubernetes 1.27, for the Container Cloud management and managed clusters.

On existing managed clusters, MKE and MCR are updated to the latest supported version when you update your managed cluster to the Cluster release 16.1.0.

Support for Rook v1.12 in Ceph

Added support for Rook v1.12 that contains the Ceph CSI plugin 3.9.x and introduces automated recovery of RBD (RWO) volumes from a failed node onto a new one, ensuring uninterrupted operations.

For a complete list of features introduced in the new Rook version, refer to the official Rook documentation.

Support for custom device classes in a Ceph cluster

TechPreview

Implemented the customDeviceClasses parameter that enables you to specify custom device class names, different from the default ssd, hdd, and nvme, and use them in node and pool definitions.

Using this parameter, you can, for example, separate storage for large snapshots without affecting the rest of the Ceph cluster storage.
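The following minimal sketch illustrates the idea, assuming a customDeviceClasses list in the Ceph cluster specification together with per-device and per-pool deviceClass assignments; the exact field paths and the node name are assumptions for illustration, so verify them against the Ceph cluster configuration reference before use.

  # Illustrative sketch only: the field paths and node name below are
  # assumptions, not the authoritative Ceph cluster specification.
  spec:
    cephClusterSpec:
      extraOpts:
        customDeviceClasses:
          - archive                    # custom class in addition to ssd, hdd, and nvme
      nodes:
        storage-worker-0:
          storageDevices:
            - name: sdb
              config:
                deviceClass: archive   # assign the custom class to this device
      pools:
        - name: snapshots
          deviceClass: archive         # pool backed only by the custom class
          replicated:
            size: 3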

Network policies for Rook Ceph daemons

To enhance network security, added NetworkPolicy objects for all types of Ceph daemons. These policies allow only specified ports to be used by the corresponding Ceph daemon pods.
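As a rough illustration of what such a policy enforces (not the exact objects created by the product, which are managed automatically), a Kubernetes NetworkPolicy restricting ingress to the standard Ceph Monitor ports could look as follows; the namespace and pod labels are assumptions.

  # Illustrative example only: the actual NetworkPolicy objects are created
  # automatically by the product; the namespace and labels here are assumptions.
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: ceph-mon-ports
    namespace: rook-ceph               # assumed namespace of the Ceph daemons
  spec:
    podSelector:
      matchLabels:
        app: rook-ceph-mon             # assumed label on Ceph Monitor pods
    policyTypes:
      - Ingress
    ingress:
      - ports:
          - protocol: TCP
            port: 3300                 # Ceph Messenger v2
          - protocol: TCP
            port: 6789                 # Ceph Messenger v1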

Upgraded logging pipeline in StackLight

Completely reorganized and significantly improved the StackLight logging pipeline by implementing the following changes:

  • Switched to a storage-based log retention strategy that optimizes storage utilization and ensures effective data retention by allocating storage resources based on the importance and volume of different data types. The logging index management provides the following advantages:

    • Storage-based rollover mechanism

    • Consistent shard allocation

    • Minimal size of cluster state

    • Storage compression

    • No filter by logging level (filtering by tag is still available)

    • Control over disk space consumed by each of the following index types:

      • Logs

      • OpenStack notifications

      • Kubernetes events

  • Introduced new system and audit indices that are managed by OpenSearch data streams, a convenient way to manage insert-only pipelines such as log message collection.

  • Introduced the OpenSearchStorageUsageCritical and OpenSearchStorageUsageMajor alerts to monitor OpenSearch used and free space from the file system perspective.

  • Introduced the following parameters (see the illustrative sketch after this list):

    • persistentVolumeUsableStorageSizeGB to define exclusive OpenSearch node usage

    • output_kind to define the type of logs to be forwarded to external outputs
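A hypothetical values fragment showing how such parameters might be set is provided below; the key paths are assumptions made for illustration only, so consult the StackLight configuration reference for the authoritative parameter locations.

  # Hypothetical fragment: key paths are assumptions made for illustration only.
  elasticsearch:
    persistentVolumeUsableStorageSizeGB: 400   # assumed path for exclusive OpenSearch node usage
  logging:
    externalOutputs:
      remote-collector:                        # hypothetical output name
        output_kind: logs                      # assumed: log type forwarded to this output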

Important

Changes in the StackLight logging pipeline require the following actions before and after the managed cluster update:

Support for custom labels during alert injection

Added the alertsCommonLabels parameter for the Prometheus server that defines the list of custom labels to be injected into firing alerts when they are sent to Alertmanager.
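A minimal sketch of the idea, assuming the parameter accepts a map of label names to values under the Prometheus server section of the StackLight configuration; the surrounding key path is an assumption.

  # Sketch only: the surrounding key path and value format are assumptions.
  prometheusServer:
    alertsCommonLabels:
      cluster_id: managed-cluster-01           # hypothetical label injected into firing alerts
      environment: production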

Caution

When new labels are injected, Prometheus sends alert updates with a new set of labels, which can potentially cause Alertmanager to have duplicated alerts for a short period of time if the cluster currently has firing alerts.

Components versions

The following table lists the components versions of the Cluster release 16.1.0.

Component

Application/Service

Version

Cluster orchestration Updated

Mirantis Kubernetes Engine

3.7.5 0

Container runtime Updated

Mirantis Container Runtime

23.0.9 1

Distributed storage Updated

Ceph

17.2.7 (Quincy)

Rook

1.12.10

StackLight

Alerta Updated

9.0.1

Alertmanager

0.25.0

Alertmanager Webhook ServiceNow

0.1

Blackbox Exporter

0.24.0

cAdvisor

0.47.2

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.5.0

Fluentd

1.15.3

Grafana Updated

10.3.1

Grafana Image Renderer Updated

3.8.4

kube-state-metrics Updated

2.10.1

Metric Collector

0.1

Metricbeat

7.12.1

Node Exporter Updated

1.7.0

OAuth2 Proxy

7.1.3

OpenSearch Updated

2.11.0

OpenSearch Dashboards Updated

2.11.1

Prometheus Updated

2.48.0

Prometheus ES Exporter

0.14.0

Prometheus MS Teams

1.5.2

Prometheus Patroni Exporter

0.0.1

Prometheus Postgres Exporter Updated

0.15.0

Prometheus Relay

0.4

sf-notifier

0.4

sf-reporter

0.1

Spilo

13-2.1p9

Telegraf

1.9.1

1.28.5 Updated

Telemeter

4.4

0 For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1 For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the artifacts of components included in the Cluster release 16.1.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.26.0-16.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.7-8.release

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.26.0-15

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.9.0-4.release

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.9.2-1.release

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.6.2-1.release

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.3.2-1.release

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.4.2-1.release

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.9.2-1.release

rook

mirantis.azurecr.io/ceph/rook:v1.12.10-13

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.24.0-46-gdaf7dbc/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.39.13

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.39.13.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.39.13.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.39.13

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta Updated

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow Updated

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor Updated

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-7.tgz

elasticsearch-curator Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-219.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-278.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat Updated

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch Updated

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-80.tgz

opensearch-dashboards Updated

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-53.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-258.tgz

prometheus-blackbox-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams Updated

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp Updated

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-16.tgz

sf-notifier Updated

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter Updated

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.14.2.tgz

telegraf-ds Updated

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-40.tgz

telegraf-s Updated

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-41.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20240201074016

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20240201074016

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20240119023014

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20240201074025

blackbox-exporter Updated

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20231204053401

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20240201074020

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq Removed

n/a

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20231215023011

grafana Updated

mirantis.azurecr.io/stacklight/grafana:10.3.1

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20231124023009

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20240201074025

kube-state-metrics Updated

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20240201074022

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20240201074019

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20240201074016

node-exporter Updated

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-7

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20240201074019

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20240201074016

openstack-refapp Updated

mirantis.azurecr.io/openstack/openstack-refapp:0.1.5

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20240201074024

prometheus Updated

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20240201074023

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20231204064415

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20240201074021

prometheus-postgres-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20240201074019

psql-client Updated

mirantis.azurecr.io/scale/psql-client:v13-20240117093252

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20240201074022

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20240119124536

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20240201074020

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20240201074021

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20240201074016 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20240201074023 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20240201074019

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20240201074020

System and MCR artifacts

1 Only for bare metal clusters

16.0.x series

Major and patch versions update path

The primary distinction between major and patch product versions is that major release versions introduce new functionality, whereas patch release versions predominantly deliver minor product enhancements, mostly CVE fixes, for your clusters.

Depending on your deployment needs, you can either update only between major Cluster releases or apply patch updates between major releases. The latter option ensures that you receive security fixes as soon as they become available, but be prepared to update your cluster frequently, approximately once every three weeks. Otherwise, update only between major Cluster releases, as each subsequent major Cluster release includes the patch Cluster release updates of the previous major Cluster release.

This section outlines release notes for unsupported major and patch Cluster releases of the 16.0.x series.

16.0.4

This section outlines release notes for the patch Cluster release 16.0.4 that is introduced in the Container Cloud release 2.25.4 and is based on the Cluster releases 16.0.0, 16.0.1, 16.0.2, and 16.0.3.

This Cluster release supports Mirantis Kubernetes Engine 3.7.3 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.7.

  • For the list of enhancements and CVE fixes delivered with this patch Cluster release, see 2.25.4

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.0.4.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.25.4-1

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.6-8.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.25.4-0

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.8.1-9.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-2.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-2.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-2.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-2.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-2.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.11.11-22

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.23.0-88-g35be0fc/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.38.33

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.38.33.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.38.33.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.38.33

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-6.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-196.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-254.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-63.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-49.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-257.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-13.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.13.12.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-40.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-40.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20231215023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20231215023011

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20231211141923

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20231215023021

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20231204053401

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20231215023012

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq

mirantis.azurecr.io/scale/curl-jq:alpine-20231127081128

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20231215023011

grafana

mirantis.azurecr.io/stacklight/grafana:10.2.2

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20231124023009

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20231215023018

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20231226150248

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20231215023013

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20231215023009

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-6

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20231215023014

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20231215023009

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.1.4

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20231215023019

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20231215023018

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20231204064415

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20231215023018

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20231215023011

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20231116082249

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20231215023014

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20231211141939

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20231215023013

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20231215023015

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20231215023009 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20231204142011

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20231215023013

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20231215023013

System and MCR artifacts

Unchanged as compared to 16.0.0

1

Only for bare metal clusters

16.0.3

This section outlines release notes for the patch Cluster release 16.0.3 that is introduced in the Container Cloud release 2.25.3 and is based on Cluster releases 16.0.0, 16.0.1, and 16.0.2.

This Cluster release supports Mirantis Kubernetes Engine 3.7.3 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.7.

  • For the list of enhancements and CVE fixes delivered with this patch Cluster release, see 2.25.3

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.0.3.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.25.3-3

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v17.2.6-8.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.25.3-0

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.8.1-8.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-2.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-2.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-2.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-2.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-2.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.11.11-21

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.23.0-87-gc9d7d3b/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.38.31

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.38.31.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.38.31.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.38.31

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-6.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-196.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-254.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-63.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-49.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-59.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-257.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-13.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.13.10.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-40.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-40.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20231201023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20231201023012

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20231114075954

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20231201023019

blackbox-exporter Updated

mirantis.azurecr.io/stacklight/blackbox-exporter:0-20231204053401

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20231201023011

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq Updated

mirantis.azurecr.io/scale/curl-jq:alpine-20231127081128

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20231204142422

grafana Updated

mirantis.azurecr.io/stacklight/grafana:10.2.2

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20231124023009

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20231201023018

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20231201023019

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20231201023014

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20231201023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-6

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20231201023011

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20231201023009

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.1.4

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20231201023014

prometheus Updated

mirantis.azurecr.io/stacklight/prometheus:v2.48.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20231201023015

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5-20231204064415

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20231201023016

prometheus-postgres-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.15.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20231201023016

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20231116082249

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20231201023011

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20231110023016

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20231207134103

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20231201023015

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20231207133615 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20231204142011 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20231201023015

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20231201023012

System and MCR artifacts

Unchanged as compared to 16.0.0

1

Only for bare metal clusters

16.0.2

This section outlines release notes for the patch Cluster release 16.0.2 that is introduced in the Container Cloud release 2.25.2 and is based on Cluster releases 16.0.0 and 16.0.1.

This Cluster release supports Mirantis Kubernetes Engine 3.7.2 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.7.

  • For the list of enhancements and CVE fixes delivered with this patch Cluster release, see 2.25.2

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.0.2.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.25.2-3

Docker images

ceph Updated

mirantis.azurecr.io/mirantis/ceph:v17.2.6-5.cve

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.25.2-0

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.8.1-6.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-2.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-2.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-2.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-2.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-2.cve

rook Updated

mirantis.azurecr.io/ceph/rook:v1.11.11-17

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.23.0-84-g8d74d7c/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.38.29

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.38.29.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.38.29.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.38.29

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-6.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-196.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-254.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-63.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-49.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-57.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-257.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-13.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.13.8.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-40.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-40.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20231117023008

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20231121101237

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20231114075954

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20231117023019

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.24.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20231121100850

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq

mirantis.azurecr.io/scale/curl-jq:alpine-20231019061751

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20231117023010

grafana

mirantis.azurecr.io/stacklight/grafana:9.5.13

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20231030112043

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20231117023017

kube-state-metrics Updated

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.10.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20231117023017

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20231117023011

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20231117023008

node-exporter Updated

mirantis.azurecr.io/stacklight/node-exporter:v1.7.0

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-6

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20231121103248

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20231121104249

openstack-refapp Updated

mirantis.azurecr.io/openstack/openstack-refapp:0.1.4

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20231117023020

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.44.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20231117023017

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20231117023018

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20231117023012

psql-client Updated

mirantis.azurecr.io/scale/psql-client:v13-20231116082249

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20231117023016

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20231110023016

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20231117023015

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20231117023017

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20231110023008 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20231030132045

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20231117023011

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20231117023011

System and MCR artifacts

Unchanged as compared to 16.0.0

1

Only for bare metal clusters

16.0.1

This section outlines release notes for the patch Cluster release 16.0.1 that is introduced in the Container Cloud release 2.25.1 and is based on the Cluster release 16.0.0.

This Cluster release supports Mirantis Kubernetes Engine 3.7.2 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.7.

  • For the list of enhancements and CVE fixes delivered with this patch Cluster release, see 2.25.1

  • For details on patch release delivery, see Patch releases

This section lists the artifacts of components included in the Cluster release 16.0.1.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.25.1-9

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.6-2.cve

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.25.1-8

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.8.1-4.cve

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-2.cve

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-2.cve

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-2.cve

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-2.cve

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-2.cve

rook

mirantis.azurecr.io/ceph/rook:v1.11.11-15

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.23.0-84-g8d74d7c/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.38.22

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.38.22.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.38.22.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.38.22

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta Updated

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-33.tgz

alertmanager-webhook-servicenow Updated

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-8.tgz

cadvisor Updated

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-6.tgz

elasticsearch-curator Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-15.tgz

elasticsearch-exporter Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-10.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-196.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-254.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.23.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-17.tgz

metricbeat Updated

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-25.tgz

opensearch Updated

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-63.tgz

opensearch-dashboards Updated

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-49.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-57.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-257.tgz

prometheus-blackbox-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-19.tgz

prometheus-es-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-18.tgz

prometheus-msteams Updated

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-12.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-13.tgz

sf-notifier Updated

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-10.tgz

sf-reporter Updated

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.13.7.tgz

telegraf-ds Updated

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-40.tgz

telegraf-s Updated

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-40.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-14.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-14.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20231103023010

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20231103023014

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20231027101957

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20231027023014

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.24.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20231027023014

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq Updated

mirantis.azurecr.io/scale/curl-jq:alpine-20231019061751

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20231027023015

grafana Updated

mirantis.azurecr.io/stacklight/grafana:9.5.13

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20231030112043

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1-20231030141315

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20231103023015

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20231103023010

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20231027023009

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.6.0

oauth2-proxy Updated

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-5

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20231103023014

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20231103023010

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.1.3

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20231103023015

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.44.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20231103023015

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20231103023015

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20231103023010

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20230817113822

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20231027023020

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230911151029

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20231103023014

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20231103023015

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20231103023010 Updated

mirantis.azurecr.io/stacklight/telegraf:1-20231030132045 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20231027023011

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20231103023014

System and MCR artifacts

Unchanged as compared to 16.0.0

1

Only for bare metal clusters

16.0.0

This section outlines release notes for the Cluster release 16.0.0 that is introduced in the Container Cloud release 2.25.0.

This Cluster release supports Mirantis Kubernetes Engine 3.7.1 with Kubernetes 1.27 and Mirantis Container Runtime 23.0.7.

For the list of known and addressed issues, refer to the Container Cloud release 2.25.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 16.0.0 that is introduced in the Container Cloud release 2.25.0.

Support for MKE 3.7.1 and MCR 23.0.7

Introduced support for Mirantis Container Runtime (MCR) 23.0.7 and Mirantis Kubernetes Engine (MKE) 3.7.1, which supports Kubernetes 1.27, for the Container Cloud management and managed clusters. On existing clusters, MKE and MCR are updated to the latest supported versions when you update your managed cluster to the Cluster release 16.0.0.

Caution

Support for MKE 3.6.x is dropped. Therefore, new deployments on MKE 3.6.x are not supported.

Detailed view of a Ceph cluster summary in web UI

Implemented the Ceph Cluster details page in the Container Cloud web UI containing the Machines and OSDs tabs with detailed descriptions and statuses of the Ceph machines and Ceph OSDs that comprise a Ceph cluster deployment.

Addressing storage devices using by-id identifiers

Implemented the capability to address Ceph storage devices using the by-id identifiers.

The by-id identifier is the only persistent device identifier for a Ceph cluster that remains stable after the cluster upgrade or any other maintenance. Therefore, Mirantis recommends using device by-id symlinks rather than device names or by-path symlinks.
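
For illustration, a minimal sketch of how a storage device might be referenced by its by-id symlink in the KaaSCephCluster specification; the field names (fullPath, deviceClass), the node name, and the device identifier are assumptions used only to show the idea, not the exact schema:

# Illustrative sketch only: reference an OSD device through its stable by-id
# symlink instead of a device name such as /dev/sdb. Field names below are
# assumptions and may differ from the exact KaaSCephCluster schema.
spec:
  cephClusterSpec:
    nodes:
      worker-storage-0:
        storageDevices:
          - fullPath: /dev/disk/by-id/wwn-0x5000c500a1b2c3d4   # hypothetical by-id symlink
            config:
              deviceClass: hdd

Because a by-id symlink is derived from the device serial number or WWN, it remains stable across reboots and cluster upgrades, unlike /dev/sdX names.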

Verbose Ceph cluster status

Added the kaasCephState field in the KaaSCephCluster.status specification to display the current state of KaaSCephCluster and any errors during object reconciliation, including specification generation, object creation on a managed cluster, and status retrieval.
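
As a hedged illustration only, such a state might surface in the object status roughly as follows; the nested field names and the error message are assumptions for demonstration purposes, not the exact schema:

# Illustrative sketch of a KaaSCephCluster status carrying the kaasCephState field.
# The sub-fields and the message text are assumptions, not the documented layout.
status:
  kaasCephState:
    state: Failed
    messages:
      - "failed to generate Ceph cluster specification: no storage devices defined for machine worker-storage-1"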

Fluentd log forwarding to Splunk

TechPreview

Added initial Technology Preview support for forwarding of Container Cloud service logs, which are sent to OpenSearch by default, to Splunk using the syslog external output configuration.
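
A minimal sketch of what such a configuration might look like in the StackLight values, assuming a syslog-compatible Splunk receiver; the key names and plugin options below are assumptions for illustration and do not reproduce the exact StackLight schema:

# Illustrative sketch only: forward Container Cloud service logs to Splunk through
# a Fluentd syslog output. Key names and options are assumptions; refer to the
# product documentation for the supported syntax.
logging:
  externalOutputs:
    splunk_syslog:
      plugin: remote_syslog
      host: splunk.example.com   # hypothetical Splunk syslog receiver address
      port: 514
      protocol: tcp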

Ceph monitoring improvements

Implemented the following monitoring improvements for Ceph:

  • Optimized the following Ceph dashboards in Grafana: Ceph Cluster, Ceph Pools, Ceph OSDs.

  • Removed the redundant Ceph Nodes Grafana dashboard. You can view its content using the following dashboards:

    • Ceph stats through the Ceph Cluster dashboard.

    • Resource utilization through the System dashboard, which now includes filtering by Ceph node labels, such as ceph_role_osd, ceph_role_mon, and ceph_role_mgr.

  • Removed the rook_cluster alert label.

  • Removed the redundant CephOSDDown alert.

  • Renamed the CephNodeDown alert to CephOSDNodeDown.

Optimization of StackLight ‘NodeDown’ alerts

Optimized StackLight NodeDown alerts for better notification handling after cluster recovery from an accident:

  • Reworked the NodeDown-related alert inhibition rules

  • Reworked the logic of all NodeDown-related alerts for all supported groups of nodes, which includes renaming of the <alertName>TargetsOutage alerts to <alertName>TargetDown

  • Added the TungstenFabricOperatorTargetDown alert for Tungsten Fabric deployments of MOSK clusters

  • Removed redundant KubeDNSTargetsOutage and KubePodsNotReady alerts

OpenSearch performance optimization

Optimized the OpenSearch configuration and StackLight data model to provide better resource utilization and faster query response. Added the following enhancements; a configuration sketch of the new logging parameters follows the list:

  • Limited the default namespaces for log collection with the ability to add custom namespaces to the monitoring list using the following parameters:

    • logging.namespaceFiltering.logs - limits the number of namespaces for Pod logs collection. Enabled by default.

    • logging.namespaceFiltering.events - limits the number of namespaces for Kubernetes events collection. Disabled by default.

    • logging.namespaceFiltering.events/logs.extraNamespaces - adds extra namespaces, which are not in the default list, to collect specific Kubernetes Pod logs or Kubernetes events. Empty by default.

  • Added the logging.enforceOopsCompression parameter that enforces 32 GB of heap size, unless the defined memory limit allows using 50 GB of heap. Enabled by default.

  • Added the NO_SEVERITY severity label that is automatically added to a log that has no severity label in the message. This gives more control over which logs are actually processed by Fluentd and which are skipped by mistake.

  • Added documentation on how to tune OpenSearch performance using hardware and software settings for baremetal-based Container Cloud clusters.
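
As a minimal sketch under stated assumptions, the new logging parameters described above might be combined in the StackLight configuration as follows; the nesting of the enabled and extraNamespaces keys and the namespace name are assumptions, only the parameter names come from this release:

# Minimal sketch of the StackLight logging parameters described above.
# The exact nesting (for example, the enabled sub-key) and the namespace name
# are assumptions; only the top-level parameter names come from this release.
logging:
  namespaceFiltering:
    logs:
      enabled: true               # limit Pod log collection to the default namespace list
      extraNamespaces:
        - my-app-namespace        # hypothetical extra namespace to also collect logs from
    events:
      enabled: false              # Kubernetes events filtering remains disabled by default
  enforceOopsCompression: true    # enforce 32 GB of heap unless the memory limit allows 50 GB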

Documentation enhancements

On top of continuous improvements delivered to the existing Container Cloud guides, added documentation on how to export data from the Table panels of Grafana dashboards to CSV.

Components versions

The following table lists the components versions of the Cluster release 16.0.0.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.7.1 0

Container runtime

Mirantis Container Runtime

23.0.7 1

Distributed storage

Ceph

17.2.6 (Quincy)

Rook

1.11.11-13

LCM

helm-controller

1.38.17

lcm-ansible

0.23.0-73-g01aa9b3

lcm-agent

1.38.17

StackLight

Alerta

9.0.0

Alertmanager

0.25.0

Alertmanager Webhook ServiceNow

0.1

Blackbox Exporter

0.24.0

cAdvisor

0.47.2

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.5.0

Fluentd

1.15.3

Grafana

9.5.7

Grafana Image Renderer

3.7.1

kube-state-metrics

2.8.2

Metric Collector

0.1

Metricbeat

7.12.1

Node Exporter

1.6.0

OAuth2 Proxy

7.1.3

OpenSearch

2.8.0

OpenSearch Dashboards

2.7.0

Prometheus

2.44.0

Prometheus ES Exporter

0.14.0

Prometheus MS Teams

1.5.2

Prometheus Patroni Exporter

0.0.1

Prometheus Postgres Exporter

0.12.0

Prometheus Relay

0.4

sf-notifier

0.4

sf-reporter

0.1

Spilo

13-2.1p9

Telegraf

1.9.1

1.27.3

Telemeter

4.4

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the artifacts of components included in the Cluster release 16.0.0.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.25.0-1.tgz

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.6-rel-5

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.25.0-0

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.8.1-rel-1

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-cve-1

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-cve-1

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-cve-1

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-cve-1

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-cve-1

rook

mirantis.azurecr.io/ceph/rook:v1.11.11-13

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.23.0-73-g01aa9b3/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.38.17

Helm charts

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.38.17.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.38.17.tgz

Docker images

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.38.17

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-3.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-12.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-7.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-193.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-250.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.17.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-60.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-47.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-54.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-245.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-15.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-13.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-7.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-7.tgz

stacklight

https://binary.mirantis.com/stacklight/helm/stacklight-0.13.3.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-37.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-37.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web

mirantis.azurecr.io/stacklight/alerta-web:9-20230929023008

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0-20230929023012

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230912073324

alpine-utils

mirantis.azurecr.io/stacklight/alpine-utils:1-20230929023018

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.24.0

cadvisor

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230929023009

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq

mirantis.azurecr.io/scale/curl-jq:alpine-20230925094109

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.15-20230929023011

grafana

mirantis.azurecr.io/stacklight/grafana:9.5.7

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230929023011

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230929023017

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.22-20230929023018

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230929023015

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230929023009

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.6.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-4

opensearch

mirantis.azurecr.io/stacklight/opensearch:2-20230929023012

opensearch-dashboards

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20230929023008

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.1.3

pgbouncer

mirantis.azurecr.io/stacklight/pgbouncer:1-20230929023018

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.44.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230929023017

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230929023018

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230929023016

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20230817113822

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20230929023013

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230911151029

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230929023012

stacklight-toolkit

mirantis.azurecr.io/stacklight/stacklight-toolkit:20231004090138

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230915023009

mirantis.azurecr.io/stacklight/telegraf:1.27-20230809094327

telemeter

mirantis.azurecr.io/stacklight/telemeter:4.4-20230929023011

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230929023012

System and MCR artifacts
1

Only for bare metal clusters

15.x series

This section outlines release notes for unsupported Cluster releases of the 15.x series.

Major and patch versions update path

The primary distinction between major and patch product versions is that major release versions introduce new functionality, whereas patch release versions predominantly offer minor product enhancements, mostly CVE resolutions for your clusters.

Depending on your deployment needs, you can either update only between major Cluster releases or apply patch updates between major releases. Choosing the latter option ensures that you receive security fixes as soon as they become available. However, be prepared to update your cluster frequently, approximately once every three weeks. Otherwise, you can update only between major Cluster releases, as each subsequent major Cluster release includes the patch Cluster release updates of the previous major Cluster release.

15.0.4

This section includes release notes for the patch Cluster release 15.0.4 that is introduced in the Container Cloud patch release 2.24.5 and is based on Cluster releases 15.0.1, 15.0.2, and 15.0.3.

This patch Cluster release introduces MOSK 23.2.3 that is based on Mirantis Kubernetes Engine 3.6.6 with Kubernetes 1.24 and Mirantis Container Runtime 20.10.17.

This section lists the components artifacts of the Cluster release 15.0.4.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.24.4-8.tgz

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.6-cve-1

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.24.4-7

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.8.0-cve-2

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-cve-1

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-cve-1

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-cve-1

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-cve-1

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-cve-1

rook

mirantis.azurecr.io/ceph/rook:v1.11.4-12

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.22.0-75-g08569a8/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.37.25

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.37.25.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.37.25.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.37.25

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-7.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-49.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-176.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-231.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.17.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-58.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-47.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-52.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-240.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-7.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.12.13.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-37.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-37.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20230915023010

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20230915023015

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230912073324

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20230915023025

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.24.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230915023013

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq Updated

mirantis.azurecr.io/scale/curl-jq:alpine-20230821070620

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20230915023013

grafana

mirantis.azurecr.io/stacklight/grafana:9.5.7

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230915023013

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230915023025

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20230915023021

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230915023017

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230915023011

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.6.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-4

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20230915023015

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20230915023009

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.1.3

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20230915023025

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.44.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230915023021

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230915023025

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230915023021

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20230817113822

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20230915023010

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230911151029

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230915023014

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20230915023021

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230915023009 Updated

mirantis.azurecr.io/stacklight/telegraf:1.27-20230809094327

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20230915023020

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230915023020

System and MCR artifacts

Unchanged as compared to 15.0.1

1

Only for existing clusters

15.0.3

This section includes release notes for the patch Cluster release 15.0.3 that is introduced in the Container Cloud patch release 2.24.4 and is based on Cluster releases 15.0.1 and 15.0.2.

This patch Cluster release introduces MOSK 23.2.2 that is based on Mirantis Kubernetes Engine 3.6.6 with Kubernetes 1.24 and Mirantis Container Runtime 20.10.17.

This section lists the components artifacts of the Cluster release 15.0.3.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.24.4-8.tgz

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.6-cve-1

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.24.4-7

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.8.0-cve-2

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-cve-1

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-cve-1

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-cve-1

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-cve-1

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-cve-1

rook Updated

mirantis.azurecr.io/ceph/rook:v1.11.4-12

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.22.0-66-ga855169/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.37.24

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.37.24.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.37.24.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.37.24

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-7.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-49.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-176.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-231.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.17.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-58.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-47.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-52.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-240.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-7.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.12.10.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-37.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-37.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20230829061227

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20230825023014

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230601043943

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20230825023021

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.24.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230825023011

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq

mirantis.azurecr.io/scale/curl-jq:alpine-20230706142802

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20230825023012

grafana

mirantis.azurecr.io/stacklight/grafana:9.5.7

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230712154008

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230825023020

keycloak-gatekeeper Removed

n/a

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20230825023019

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230825023018

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230825023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.6.0

oauth2-proxy New

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-4

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20230825023013

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20230825023009

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.1.3

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20230825023021

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.44.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230825023020

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230825023021

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230825023020

psql-client Updated

mirantis.azurecr.io/scale/psql-client:v13-20230817113822

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20230825023009

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230601044047

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230825023018

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20230825023019

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230825023009 Updated

mirantis.azurecr.io/stacklight/telegraf:1.27-20230809094327

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20230825023014

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230825023013

System and MCR artifacts

Unchanged as compared to 15.0.1

1

Only for existing clusters

15.0.2

This section includes release notes for the patch Cluster release 15.0.2 that is introduced in the Container Cloud patch release 2.24.3 and is based on the major Cluster release 15.0.1.

This patch Cluster release introduces MOSK 23.2.1 that is based on Mirantis Kubernetes Engine 3.6.6 with Kubernetes 1.24 and Mirantis Container Runtime 20.10.17, in which docker-ee-cli was updated to version 20.10.18 to fix the following CVEs: CVE-2023-28840, CVE-2023-28642, CVE-2022-41723.

This section lists the components artifacts of the Cluster release 15.0.2.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.24.3-2.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.6-cve-1

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.24.3-0

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.8.0-cve-1

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-cve-1

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-cve-1

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-cve-1

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-cve-1

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-cve-1

rook

mirantis.azurecr.io/ceph/rook:v1.11.4-11

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.22.0-63-g8f4f248/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.37.23

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.37.23.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.37.23.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.37.23

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-7.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-49.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-176.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-231.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.17.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-58.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-47.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-52.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-240.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-7.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.12.9.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-37.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-37.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web

mirantis.azurecr.io/stacklight/alerta-web:9-20230714023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20230811023012

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230601043943

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20230811023020

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.24.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230811023011

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq

mirantis.azurecr.io/scale/curl-jq:alpine-20230706142802

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20230811023012

grafana Updated

mirantis.azurecr.io/stacklight/grafana:9.5.7

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230712154008

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230811023020

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-5

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20230811023020

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230811023017

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230811023011

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.6.0

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20230811023016

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20230811023009

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.1.3

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20230811023021

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.44.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230811023019

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230811023020

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230811023018

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20230706142757

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20230811023011

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230601044047

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230811023016

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20230811023013

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230811023008 Updated

mirantis.azurecr.io/stacklight/telegraf:1.27-20230809094327 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20230811023013

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230811023014

System and MCR artifacts

Unchanged as compared to 15.0.1

1

Only for existing clusters

15.0.1

This section outlines release notes for the major Cluster release 15.0.1 that is introduced in the Container Cloud release 2.24.2. This Cluster release is based on the Cluster release 14.0.1 and supports Mirantis Kubernetes Engine 3.6.5 with Kubernetes 1.24 and Mirantis Container Runtime 20.10.17.

For the list of known and addressed issues, refer to the Container Cloud release 2.24.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 15.0.1 that is introduced in the Container Cloud release 2.24.2.

Support for MKE 3.6.5 and MCR 20.10.17

Added support for Mirantis Container Runtime (MCR) 20.10.17 and Mirantis Kubernetes Engine (MKE) 3.6.5 that supports Kubernetes 1.24.

An update from the Cluster release 12.7.0 or 12.7.4 to 15.0.1 becomes available through the Container Cloud web UI menu once the related management or regional cluster automatically upgrades to Container Cloud 2.24.2.

Caution

Support for MKE 3.5.x is dropped. Therefore, new deployments on MKE 3.5.x are not supported.

Automatic upgrade of Ceph from Pacific to Quincy

Upgraded Ceph major version from Pacific 16.2.11 to Quincy 17.2.6 with an automatic upgrade of Ceph components on existing managed clusters during the Cluster version update.

Mandatory deviceClass field for Ceph pools

On top of addressing the known issue 30635, introduced a requirement for the deviceClass field in each Ceph pool specification to prevent recurrence of the issue. This requirement applies to all pools defined in spec.cephClusterSpec.pools, spec.cephClusterSpec.objectStorage, and spec.cephClusterSpec.sharedFilesystem.
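
For example, a pool item under spec.cephClusterSpec.pools must now explicitly define deviceClass. The following KaaSCephCluster fragment is an illustrative sketch only: the pool name, role, replication size, and device class are hypothetical values.

    spec:
      cephClusterSpec:
        pools:
        - name: kubernetes          # hypothetical pool name
          role: kubernetes          # hypothetical pool role
          deviceClass: hdd          # mandatory since this release
          replicated:
            size: 3
          default: true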

Monitoring of network connectivity between Ceph nodes

Introduced healthcheck metrics and the following Ceph alerts to monitor network connectivity between Ceph nodes:

  • CephDaemonSlowOps

  • CephMonClockSkew

  • CephOSDFlapping

  • CephOSDSlowClusterNetwork

  • CephOSDSlowPublicNetwork

Major version update of OpenSearch and OpenSearch Dashboards

Updated OpenSearch and OpenSearch Dashboards from major version 1.3.7 to 2.7.0. The latest version includes a number of enhancements along with bug and security fixes.

Caution

The version update process can take up to 20 minutes, during which both OpenSearch and OpenSearch Dashboards may become temporarily unavailable. Additionally, the KubeStatefulsetUpdateNotRolledOut alert for the opensearch-master StatefulSet may fire for a short period of time.

Note

Support for the major version 1.x reaches end of life on December 31, 2023.

Components versions

The following table lists the components versions of the Cluster release 15.0.1.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.6.5 0

Container runtime

Mirantis Container Runtime

20.10.17 1

Distributed storage

Ceph

17.2.6 (Quincy)

Rook

1.11.4-10

LCM

helm-controller

1.37.15

lcm-ansible

0.22.0-52-g62235a5

lcm-agent

1.37.15

StackLight

Alerta

9.0.0

Alertmanager

0.25.0

Alertmanager Webhook ServiceNow

0.1

Blackbox Exporter

0.24.0

cAdvisor

0.47.1

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.5.0

Fluentd

1.15.3

Grafana

9.4.9

Grafana Image Renderer

3.7.0

keycloak-gatekeeper

7.1.3-5

kube-state-metrics

2.8.2

Metric Collector

0.1

Metricbeat

7.12.1

Node Exporter

1.6.0

OpenSearch

2.7.0

OpenSearch Dashboards

2.7.0

Prometheus

2.44.0

Prometheus ES Exporter

0.14.0

Prometheus MS Teams

1.5.2

Prometheus Patroni Exporter

0.0.1

Prometheus Postgres Exporter

0.12.0

Prometheus Relay

0.4

sf-notifier

0.3

sf-reporter

0.1

Spilo

13-2.1p9

Telegraf

1.9.1

1.26.2

Telemeter

4.4

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 15.0.1.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.24.0-10.tgz

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.6-rel-5

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.24.0-9

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.8.0-rel-3

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-rel-1

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-rel-1

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-rel-1

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-rel-1

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-rel-1

rook

mirantis.azurecr.io/ceph/rook:v1.11.4-10


LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.22.0-49-g9618f2a/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.37.15

Helm charts

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.37.15.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.37.15.tgz

Docker images

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.37.15


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-7.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-49.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-175.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-225.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.17.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-58.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-47.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-52.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-240.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-4.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight

https://binary.mirantis.com/stacklight/helm/stacklight-0.12.8.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-37.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-37.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web

mirantis.azurecr.io/stacklight/alerta-web:9-20230602023009

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.25.0

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230601043943

alpine-utils

mirantis.azurecr.io/stacklight/alpine-utils:1-20230602023019

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.24.0

cadvisor

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230602023019

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq

mirantis.azurecr.io/scale/curl-jq:alpine-20230120171102

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.15-20230602023011

grafana

mirantis.azurecr.io/stacklight/grafana:9.4.9

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230418140825

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230602023018

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-5

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.22-20230602023016

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230602111822

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230602023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.6.0

opensearch

mirantis.azurecr.io/stacklight/opensearch:2-20230602023014

opensearch-dashboards

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20230602023014

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.0.1.dev33

pgbouncer

mirantis.azurecr.io/stacklight/pgbouncer:1-20230602023019

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.44.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230602023016

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230602023018

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230602023016

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20230124173121

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20230602023012

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230601044047

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230602023015

stacklight-toolkit

mirantis.azurecr.io/stacklight/stacklight-toolkit:20230602123559

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230602023009

mirantis.azurecr.io/stacklight/telegraf:1.26-20230602023017

telemeter

mirantis.azurecr.io/stacklight/telemeter:4.4-20230602023011

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230602023012


1

Only for existing bare metal clusters

14.x series

Major and patch versions update path

The primary distinction between major and patch product versions is that major release versions introduce new functionality, whereas patch release versions predominantly deliver minor product enhancements, mostly CVE fixes, for your clusters.

Depending on your deployment needs, you can either update only between major Cluster releases or apply patch updates between major releases. The latter option ensures that you receive security fixes as soon as they become available, but be prepared to update your cluster frequently, approximately once every three weeks. Alternatively, you can update only between major Cluster releases, as each subsequent major Cluster release includes the patch Cluster release updates of the previous major Cluster release.

This section outlines release notes for unsupported Cluster releases of the 14.x series.

14.1.0

This section outlines release notes for the Cluster release 14.1.0 that is introduced in the Container Cloud release 2.25.0. This Cluster release is dedicated to the vSphere provider only and is the last Cluster release for the vSphere provider based on Mirantis Kubernetes Engine 3.6.6 with Kubernetes 1.24.

Important

The major Cluster release 14.1.0 is the last Cluster release for the vSphere provider based on MCR 20.10 and MKE 3.6.6 with Kubernetes 1.24. Therefore, Mirantis highly recommends updating your existing vSphere-based managed clusters to the Cluster release 16.0.1 that contains newer versions of MCR, MKE, and Kubernetes. Otherwise, your management cluster upgrade to Container Cloud 2.25.2 will be blocked.

For the update procedure, refer to Operations Guide: Update a patch Cluster release of a managed cluster.

Since Container Cloud 2.25.1, the major Cluster release 14.1.0 is deprecated. Greenfield vSphere-based deployments on this Cluster release are not supported. Use the patch Cluster release 16.0.1 for new deployments instead.

For the list of known and addressed issues delivered in the Cluster release 14.1.0, refer to the Container Cloud release 2.25.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 14.1.0 that is introduced in the Container Cloud release 2.25.0.

Support for MCR 23.0.7

Introduced support for Mirantis Container Runtime (MCR) 23.0.7 for the Container Cloud management and managed clusters. On existing clusters, MCR is updated to the latest supported version when you update your managed cluster to the Cluster release 14.1.0.

Addressing storage devices using by-id identifiers

Implemented the capability to address Ceph storage devices using by-id identifiers.

The by-id identifier is the only persistent device identifier for a Ceph cluster that remains stable after the cluster upgrade or any other maintenance. Therefore, Mirantis recommends using device by-id symlinks rather than device names or by-path symlinks.
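
For illustration, a storage device in the KaaSCephCluster node specification can reference the by-id symlink instead of a device name. In the sketch below, the node name, the device symlink, and the fullPath field name are assumptions and must be verified against the Ceph operations documentation.

    spec:
      cephClusterSpec:
        nodes:
          worker-0:                 # hypothetical node name
            storageDevices:
            - fullPath: /dev/disk/by-id/scsi-0ATA_HGST_HUS724040AL_example   # by-id symlink instead of /dev/sdb
              config:
                deviceClass: hdd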

Verbose Ceph cluster status

Added the kaasCephState field to the KaaSCephCluster.status specification to display the current state of the KaaSCephCluster object and any errors during object reconciliation, including specification generation, object creation on a managed cluster, and status retrieval.
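
As a simplified sketch, the new field may surface the reconciliation state and errors similar to the following; the subfield names and values are hypothetical and provided for illustration only.

    status:
      kaasCephState:
        state: Failed                                     # hypothetical current state of the object
        messages:
        - 'pool "kubernetes": deviceClass is required'    # hypothetical reconciliation error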

Fluentd log forwarding to Splunk

TechPreview

Added initial Technology Preview support for forwarding Container Cloud service logs, which are sent to OpenSearch by default, to Splunk using the syslog external output configuration.

Ceph monitoring improvements

Implemented the following monitoring improvements for Ceph:

  • Optimized the following Ceph dashboards in Grafana: Ceph Cluster, Ceph Pools, Ceph OSDs.

  • Removed the redundant Ceph Nodes Grafana dashboard. You can view its content using the following dashboards:

    • Ceph stats through the Ceph Cluster dashboard.

    • Resource utilization through the System dashboard, which now includes filtering by Ceph node labels, such as ceph_role_osd, ceph_role_mon, and ceph_role_mgr.

  • Removed the rook_cluster alert label.

  • Removed the redundant CephOSDDown alert.

  • Renamed the CephNodeDown alert to CephOSDNodeDown.

Optimization of StackLight ‘NodeDown’ alerts

Optimized StackLight NodeDown alerts for better notification handling after cluster recovery from an accident:

  • Reworked the NodeDown-related alert inhibition rules

  • Reworked the logic of all NodeDown-related alerts for all supported groups of nodes, which includes renaming of the <alertName>TargetsOutage alerts to <alertName>TargetDown

  • Added the TungstenFabricOperatorTargetDown alert for Tungsten Fabric deployments of MOSK clusters

  • Removed redundant KubeDNSTargetsOutage and KubePodsNotReady alerts

OpenSearch performance optimization

Optimized the OpenSearch configuration and StackLight data model to provide better resource utilization and faster query response. Added the following enhancements (a combined configuration sketch follows this list):

  • Limited the default namespaces for log collection with the ability to add custom namespaces to the monitoring list using the following parameters:

    • logging.namespaceFiltering.logs - limits the number of namespaces for Pod logs collection. Enabled by default.

    • logging.namespaceFiltering.events - limits the number of namespaces for Kubernetes events collection. Disabled by default.

    • logging.namespaceFiltering.events/logs.extraNamespaces - adds extra namespaces, which are not in the default list, to collect specific Kubernetes Pod logs or Kubernetes events. Empty by default.

  • Added the logging.enforceOopsCompression parameter that enforces 32 GB of heap size, unless the defined memory limit allows using 50 GB of heap. Enabled by default.

  • Added the NO_SEVERITY severity label that is automatically added to a log with no severity label in the message. This provides more control over which logs are actually processed by Fluentd and which are skipped by mistake.

  • Added documentation on how to tune OpenSearch performance using hardware and software settings for baremetal-based Container Cloud clusters.
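
A StackLight configuration sketch combining these parameters is shown below; the nesting of the enabled flags and the extra namespace name are assumptions and must be checked against the StackLight configuration reference.

    logging:
      enforceOopsCompression: true      # enabled by default
      namespaceFiltering:
        logs:
          enabled: true                 # limit Pod log collection to the default namespace list
          extraNamespaces:              # hypothetical extra namespace added to the default list
          - my-app-namespace
        events:
          enabled: false                # Kubernetes events collection filtering, disabled by default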

Documentation enhancements

On top of continuous improvements delivered to the existing Container Cloud guides, added documentation on how to export data from the Table panels of Grafana dashboards to CSV.

Components versions

The following table lists the components versions of the Cluster release 14.1.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous major Cluster release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration Updated

Mirantis Kubernetes Engine

3.6.6 0

Container runtime Updated

Mirantis Container Runtime

23.0.7 1

Distributed storage

Ceph

17.2.6 (Quincy)

Rook Updated

1.11.11-13

LCM

helm-controller Updated

1.38.17

lcm-ansible Updated

0.23.0-73-g01aa9b3

StackLight

Alerta

9.0.0

Alertmanager

0.25.0

Alertmanager Webhook ServiceNow

0.1

Blackbox Exporter

0.24.0

cAdvisor Updated

0.47.2

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.5.0

Fluentd

1.15.3

Grafana Updated

9.5.7

Grafana Image Renderer Updated

3.7.1

keycloak-gatekeeper Removed

n/a

kube-state-metrics

2.8.2

Metric Collector

0.1

Metricbeat

7.12.1

Node Exporter

1.6.0

OAuth2 Proxy New

7.1.3

OpenSearch Updated

2.8.0

OpenSearch Dashboards

2.7.0

Prometheus

2.44.0

Prometheus ES Exporter

0.14.0

Prometheus MS Teams

1.5.2

Prometheus Patroni Exporter

0.0.1

Prometheus Postgres Exporter

0.12.0

Prometheus Relay

0.4

sf-notifier Updated

0.4

sf-reporter

0.1

Spilo

13-2.1p9

Telegraf

1.9.1

1.27.3 Updated

Telemeter

4.4

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 14.1.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous major Cluster release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.25.0-1.tgz

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.6-rel-5

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.25.0-0

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.8.1-rel-1

cephcsi-registrar Updated

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-cve-1

cephcsi-provisioner Updated

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-cve-1

cephcsi-snapshotter Updated

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-cve-1

cephcsi-attacher Updated

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-cve-1

cephcsi-resizer Updated

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-cve-1

rook Updated

mirantis.azurecr.io/ceph/rook:v1.11.11-13

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.23.0-73-g01aa9b3/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.38.17

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.38.17.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.38.17.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.38.17

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor Updated

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-3.tgz

elasticsearch-curator Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-12.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-7.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-193.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-250.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.17.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch Updated

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-60.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-47.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-54.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-245.tgz

prometheus-blackbox-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-15.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

refapp Updated

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-13.tgz

sf-notifier Updated

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-7.tgz

sf-reporter Updated

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-7.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.13.3.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-37.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-37.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20230929023008

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20230929023012

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230912073324

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20230929023018

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.24.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230929023009

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq Updated

mirantis.azurecr.io/scale/curl-jq:alpine-20230925094109

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20230929023011

grafana Updated

mirantis.azurecr.io/stacklight/grafana:9.5.7

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230929023011

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230929023017

keycloak-gatekeeper Removed

n/a

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20230929023018

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230929023015

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230929023009

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.6.0

oauth2-proxy New

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-4

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20230929023012

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20230929023008

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.1.3

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20230929023018

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.44.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230929023017

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230929023018

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230929023016

psql-client Updated

mirantis.azurecr.io/scale/psql-client:v13-20230817113822

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.4-20230929023013

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230911151029

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230929023012

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20231004090138

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230915023009 Updated

mirantis.azurecr.io/stacklight/telegraf:1.27-20230809094327 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20230929023011

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230929023012

System and MCR artifacts

14.0.4

This section includes release notes for the patch Cluster release 14.0.4 that is introduced in the Container Cloud patch release 2.24.5 and is based on Cluster releases 14.0.1, 14.0.2, and 14.0.3.

This patch Cluster release is based on Mirantis Kubernetes Engine 3.6.6 with Kubernetes 1.24 and Mirantis Container Runtime 20.10.17.

  • For the list of CVE fixes delivered with this patch Cluster release, see Container Cloud 2.24.5

  • For details on patch release delivery, see Patch releases

This section lists the components artifacts of the Cluster release 14.0.4.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.24.4-8.tgz

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.6-cve-1

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.24.4-7

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.8.0-cve-2

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-cve-1

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-cve-1

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-cve-1

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-cve-1

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-cve-1

rook

mirantis.azurecr.io/ceph/rook:v1.11.4-12

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.22.0-75-g08569a8/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.37.25

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.37.25.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.37.25.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.37.25

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-7.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-49.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-176.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-231.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.17.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-58.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-47.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-52.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-240.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-11.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-7.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.12.13.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-37.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-37.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20230915023010

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20230915023015

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230912073324

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20230915023025

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.24.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230915023013

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq Updated

mirantis.azurecr.io/scale/curl-jq:alpine-20230821070620

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20230915023013

grafana

mirantis.azurecr.io/stacklight/grafana:9.5.7

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230915023013

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230915023025

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20230915023021

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230915023017

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230915023011

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.6.0

oauth2-proxy

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-4

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20230915023015

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20230915023009

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.1.3

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20230915023025

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.44.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230915023021

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230915023025

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230915023021

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20230817113822

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20230915023010

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230911151029

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230915023014

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20230915023021

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230915023009 Updated

mirantis.azurecr.io/stacklight/telegraf:1.27-20230809094327

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20230915023020

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230915023020

System and MCR artifacts

Unchanged as compared to 14.0.1

1

Only for bare metal clusters

2

Only for existing bare metal clusters

14.0.3

This section includes release notes for the patch Cluster release 14.0.3 that is introduced in the Container Cloud patch release 2.24.4 and is based on Cluster releases 14.0.1 and 14.0.2.

This patch Cluster release is based on Mirantis Kubernetes Engine 3.6.6 with Kubernetes 1.24 and Mirantis Container Runtime 20.10.17.

  • For the list of enhancements and CVE fixes delivered with this patch Cluster release, see 2.24.4

  • For details on patch release delivery, see Patch releases

This section lists the components artifacts of the Cluster release 14.0.3.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.24.4-8.tgz

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.6-cve-1

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.24.4-7

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.8.0-cve-2

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-cve-1

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-cve-1

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-cve-1

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-cve-1

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-cve-1

rook Updated

mirantis.azurecr.io/ceph/rook:v1.11.4-12

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.22.0-66-ga855169/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.37.24

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.37.24.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.37.24.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.37.24

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-7.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-49.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-176.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-231.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.17.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-58.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-47.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-52.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-240.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-11.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-7.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.12.10.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-37.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-37.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20230829061227

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20230825023014

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230601043943

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20230825023021

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.24.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230825023011

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq

mirantis.azurecr.io/scale/curl-jq:alpine-20230706142802

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20230825023012

grafana

mirantis.azurecr.io/stacklight/grafana:9.5.7

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230712154008

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230825023020

keycloak-gatekeeper Removed

n/a

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20230825023019

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230825023018

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230825023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.6.0

oauth2-proxy New

mirantis.azurecr.io/iam/oauth2-proxy:v7.1.3-4

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20230825023013

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20230825023009

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.1.3

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20230825023021

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.44.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230825023020

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230825023021

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230825023020

psql-client Updated

mirantis.azurecr.io/scale/psql-client:v13-20230817113822

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20230825023009

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230601044047

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230825023018

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20230825023019

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230825023009 Updated

mirantis.azurecr.io/stacklight/telegraf:1.27-20230809094327

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20230825023014

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230825023013

System and MCR artifacts

Unchanged as compared to 14.0.1

1

Only for bare metal clusters

2

Only for existing bare metal clusters

14.0.2

This section includes release notes for the patch Cluster release 14.0.2 that is introduced in the Container Cloud patch release 2.24.3 and is based on the Cluster release 14.0.1.

This patch Cluster release is based on Mirantis Kubernetes Engine 3.6.6 with Kubernetes 1.24 and Mirantis Container Runtime 20.10.17, in which docker-ee-cli was updated to version 20.10.18 to fix the following CVEs: CVE-2023-28840, CVE-2023-28642, CVE-2022-41723.

  • For the list of enhancements and CVE fixes delivered with this patch Cluster release, see 2.24.3

  • For details on patch release delivery, see Patch releases

This section lists the components artifacts of the Cluster release 14.0.2.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart Updated

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.24.3-2.tgz

Docker images Updated

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.6-cve-1

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.24.3-0

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.8.0-cve-1

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-cve-1

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-cve-1

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-cve-1

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-cve-1

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-cve-1

rook

mirantis.azurecr.io/ceph/rook:v1.11.4-11

LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.22.0-63-g8f4f248/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.37.23

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.37.23.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.37.23.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.37.23

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-7.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-49.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-176.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-231.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.17.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-58.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-47.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-52.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-240.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-11.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-7.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.12.9.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-37.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-37.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web

mirantis.azurecr.io/stacklight/alerta-web:9-20230714023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20230811023012

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230601043943

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20230811023020

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.24.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230811023011

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq

mirantis.azurecr.io/scale/curl-jq:alpine-20230706142802

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20230811023012

grafana Updated

mirantis.azurecr.io/stacklight/grafana:9.5.7

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230712154008

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230811023020

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-5

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20230811023020

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230811023017

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230811023011

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.6.0

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20230811023016

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20230811023009

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.1.3

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20230811023021

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.44.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230811023019

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230811023020

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230811023018

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20230706142757

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20230811023011

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230601044047

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230811023016

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20230811023013

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230811023008 Updated

mirantis.azurecr.io/stacklight/telegraf:1.27-20230809094327 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20230811023013

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230811023014

System and MCR artifacts

Unchanged as compared to 14.0.1

1

Only for bare metal clusters

2

Only for existing bare metal clusters

14.0.1

This section outlines release notes for the major Cluster release 14.0.1 that is introduced in the Container Cloud release 2.24.2.

This Cluster release supports Mirantis Kubernetes Engine 3.6.5 with Kubernetes 1.24 and Mirantis Container Runtime 20.10.17.

The Cluster release 14.0.1 is based on 14.0.0 introduced in Container Cloud 2.24.0. The only difference between these two 14.x releases is that 14.0.1 contains the following updated LCM and StackLight artifacts to address critical CVEs:

  • StackLight chart - stacklight/helm/stacklight-0.12.8.tgz

  • LCM Ansible image - lcm-ansible-v0.22.0-52-g62235a5

For the list of enhancements, refer to the Cluster release 14.0.0. For the list of known and addressed issues, refer to the Container Cloud release 2.24.0 section.

Components versions

The following table lists the components versions of the Cluster release 14.0.1.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.6.5 0

Container runtime

Mirantis Container Runtime

20.10.17 1

Distributed storage

Ceph

17.2.6 (Quincy)

Rook

1.11.4-10

LCM

helm-controller

1.37.15

lcm-ansible Updated

0.22.0-52-g62235a5

lcm-agent

1.37.15

StackLight

Alerta

9.0.0

Alertmanager

0.25.0

Alertmanager Webhook ServiceNow

0.1

Blackbox Exporter

0.24.0

cAdvisor

0.47.1

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.5.0

Fluentd

1.15.3

Grafana Updated

9.5.5

Grafana Image Renderer Updated

3.7.1

keycloak-gatekeeper

7.1.3-5

kube-state-metrics

2.8.2

Metric Collector

0.1

Metricbeat

7.12.1

Prometheus Node Exporter

1.6.0

OpenSearch Updated

2.8.0

OpenSearch Dashboards

2.7.0

Prometheus

2.44.0

Prometheus ES Exporter

0.14.0

Prometheus MS Teams

1.5.2

Prometheus Patroni Exporter

0.0.1

Prometheus Postgres Exporter

0.12.0

Prometheus Relay

0.4

sf-notifier

0.3

sf-reporter

0.1

Spilo

13-2.1p9

Telegraf

1.9.1

1.26.2

Telemeter

4.4

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 14.0.1.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.24.0-10.tgz

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.6-rel-5

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.24.0-9

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.8.0-rel-3

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-rel-1

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-rel-1

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-rel-1

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-rel-1

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-rel-1

rook

mirantis.azurecr.io/ceph/rook:v1.11.4-10


LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible Updated

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.22.0-52-g62235a5/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.37.15

Helm charts

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.37.15.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.37.15.tgz

Docker images

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.37.15


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-7.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-49.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-176.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-231.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.17.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-58.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-47.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-52.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-240.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-11.tgz

sf-notifier Updated

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-7.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.12.8.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-37.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-37.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20230714023009

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0-20230717144436

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230601043943

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20230714023021

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.24.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230714023020

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq Updated

mirantis.azurecr.io/scale/curl-jq:alpine-20230706142802

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20230714023011

grafana Updated

mirantis.azurecr.io/stacklight/grafana:9.5.5

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230712154008

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230714023021

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-5

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20230714023020

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230714023015

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230714023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.6.0

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:2-20230707023015

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20230719110228

openstack-refapp Updated

mirantis.azurecr.io/openstack/openstack-refapp:0.1.3

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20230714023021

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.44.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230714023018

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230714023020

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230714023016

psql-client Updated

mirantis.azurecr.io/scale/psql-client:v13-20230706142757

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20230714113914

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230601044047

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230717125456

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20230714023018

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230714023010 Updated

mirantis.azurecr.io/stacklight/telegraf:1.26-20230602023017

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20230714023014

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230714023016


1

Only for bare metal clusters

2

Only for existing bare metal clusters

14.0.0

This section outlines release notes for the Cluster release 14.0.0 that is introduced in the Container Cloud release 2.24.0.

This Cluster release supports Mirantis Kubernetes Engine 3.6.5 with Kubernetes 1.24 and Mirantis Container Runtime 20.10.17.

For the list of known and addressed issues, refer to the Container Cloud release 2.24.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 14.0.0 that is introduced in the Container Cloud release 2.24.0.

Support for MKE 3.6.5 and MCR 20.10.17

Introduced support for Mirantis Container Runtime (MCR) 20.10.17 and Mirantis Kubernetes Engine (MKE) 3.6.5 that supports Kubernetes 1.24 for the Container Cloud management, regional, and managed clusters. On existing clusters, MKE and MCR are updated to the latest supported version when you update your managed cluster to the Cluster release 14.0.0.

Caution

Support for MKE 3.5.x is dropped. Therefore, new deployments on MKE 3.5.x are not supported.

Note

For MOSK-based deployments, the feature support is available since MOSK 23.2.

Automatic upgrade of Ceph from Pacific to Quincy

Upgraded Ceph major version from Pacific 16.2.11 to Quincy 17.2.6 with an automatic upgrade of Ceph components on existing managed clusters during the Cluster version update.

Note

For MOSK-based deployments, the feature support is available since MOSK 23.2.

Ceph non-admin client for a shared Ceph cluster

Implemented a Ceph non-admin client to share the producer cluster resources with the consumer cluster in the shared Ceph cluster configuration. Using the non-admin client instead of the admin client eliminates the risk of destructive actions from the consumer cluster.

Caution

For MKE clusters that are part of MOSK infrastructure, the feature is not supported yet.

Dropping of redundant Ceph components from management and regional clusters

As the final part of Ceph removal from Container Cloud management clusters, which reduces resource consumption, removed the following Ceph components that were present on clusters for backward compatibility:

  • Helm chart of the Ceph Controller (ceph-operator)

  • Ceph deployments

  • Ceph namespaces ceph-lcm-mirantis and rook-ceph

Mandatory deviceClass field for Ceph pools

In addition to addressing the known issue 30635, introduced the mandatory deviceClass field in each Ceph pool specification to prevent the issue from recurring. This requirement applies to all pools defined in spec.cephClusterSpec.pools, spec.cephClusterSpec.objectStorage, and spec.cephClusterSpec.sharedFilesystem.
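The following minimal sketch shows where the deviceClass field is expected in a pool definition, assuming the pool is defined in a KaaSCephCluster resource. The API version and all pool parameters other than deviceClass are assumptions added for illustration, not an exhaustive specification:

    # Illustrative sketch only: the now-mandatory deviceClass field in a pool
    # under spec.cephClusterSpec.pools. Other fields and values are example
    # assumptions, not an authoritative specification.
    apiVersion: kaas.mirantis.com/v1alpha1   # assumed API version for illustration
    kind: KaaSCephCluster
    metadata:
      name: ceph-cluster-example
    spec:
      cephClusterSpec:
        pools:
        - name: kubernetes
          role: kubernetes          # assumed pool role for the example
          deviceClass: hdd          # mandatory: hdd, ssd, or nvme, depending on the hardware
          replicated:
            size: 3
          default: true

The same requirement applies to pool definitions under spec.cephClusterSpec.objectStorage and spec.cephClusterSpec.sharedFilesystem.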

Monitoring of network connectivity between Ceph nodes

Introduced healthcheck metrics and the following Ceph alerts to monitor network connectivity between Ceph nodes:

  • CephDaemonSlowOps

  • CephMonClockSkew

  • CephOSDFlapping

  • CephOSDSlowClusterNetwork

  • CephOSDSlowPublicNetwork

Note

For MOSK-based deployments, the feature support is available since MOSK 23.2.

Improvements to StackLight alerting

Implemented the following improvements to StackLight alerting:

  • Changed severity for multiple alerts to increase visibility of potentially workload-impacting alerts and decrease noise of non-workload-impacting alerts

  • Renamed MCCLicenseExpirationCritical to MCCLicenseExpirationHigh and MCCLicenseExpirationMajor to MCCLicenseExpirationMedium

  • For Ironic:

    • Removed IronicBmMetricsMissing in favor of IronicBmApiOutage

    • Removed inhibition rules for IronicBmTargetDown and IronicBmApiOutage

    • Improved expression for IronicBmApiOutage

  • For Kubernetes applications:

    • Reworked troubleshooting steps for KubeStatefulSetUpdateNotRolledOut, KubeDeploymentOutage, KubeDeploymentReplicasMismatch

    • Updated descriptions for KubeStatefulSetOutage and KubeDeploymentOutage

    • Changed expressions for KubeDeploymentOutage, KubeDeploymentReplicasMismatch, CephOSDDiskNotResponding, and CephOSDDown

Major version update of OpenSearch and OpenSearch Dashboards

Updated OpenSearch and OpenSearch Dashboards from major version 1.3.7 to 2.7.0. The latest version includes a number of enhancements along with bug and security fixes.

Note

For MOSK-based deployments, the feature support is available since MOSK 23.2.

Caution

The version update process can take up to 20 minutes, during which both OpenSearch and OpenSearch Dashboards may become temporarily unavailable. Additionally, the KubeStatefulSetUpdateNotRolledOut alert for the opensearch-master StatefulSet may fire for a short period of time.

Note

Support for the major version 1.x reaches end of life on December 31, 2023.

Performance tuning of Grafana dashboards

Refactored and optimized the Grafana dashboards to improve their performance for faster loading and a better user experience.

This enhancement includes extraction of the OpenSearch Indices dashboard from the OpenSearch dashboard to provide detailed information about the state of indices, including their size and the size of document values and segments.

Dropped and white-listed metrics

To improve Prometheus performance and provide better resource utilization with faster query response, dropped metrics that are unused by StackLight. Also created the default white list of metrics that you can expand.

The feature is enabled by default through the prometheusServer.metricsFiltering.enabled: true parameter. Therefore, if you have created custom alerts, recording rules, or dashboards, or if you actively use some metrics for other purposes, some of those metrics may now be dropped. Verify the white list of Prometheus scrape jobs to ensure that the required metrics are not dropped.

If a job name that relates to the required metric is not present in this list, its target metrics are not dropped and are collected by Prometheus by default. If the required metric is not present in this list, you can whitelist it using the prometheusServer.metricsFiltering.extraMetricsInclude parameter.
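A minimal sketch of how these parameters could look in the StackLight configuration values follows. Only the prometheusServer.metricsFiltering parameter names come from this section; the example metric names and any enclosing structure are assumptions for illustration:

    # Illustrative sketch: metrics filtering parameters in the StackLight chart values.
    prometheusServer:
      metricsFiltering:
        enabled: true                        # default; set to false to disable filtering
        extraMetricsInclude:                 # metrics to whitelist in addition to the default list
        - node_network_transmit_errs_total   # example metric names, adjust to your needs
        - kube_pod_container_status_restarts_total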

Components versions

The following table lists the components versions of the Cluster release 14.0.0.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.6.5 0

Container runtime

Mirantis Container Runtime

20.10.17 1

Distributed storage

Ceph

17.2.6 (Quincy)

Rook

1.11.4-10

LCM

helm-controller

1.37.15

lcm-ansible

0.22.0-49-g9618f2a

lcm-agent

1.37.15

StackLight

Alerta

9.0.0

Alertmanager

0.25.0

Alertmanager Webhook ServiceNow

0.1

Blackbox Exporter

0.24.0

cAdvisor

0.47.1

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.5.0

Fluentd

1.15.3

Grafana

9.4.9

Grafana Image Renderer

3.7.0

keycloak-gatekeeper

7.1.3-5

kube-state-metrics

2.8.2

Metric Collector

0.1

Metricbeat

7.12.1

Node Exporter

1.6.0

OpenSearch

2.7.0

OpenSearch Dashboards

2.7.0

Prometheus

2.44.0

Prometheus ES Exporter

0.14.0

Prometheus MS Teams

1.5.2

Prometheus Patroni Exporter

0.0.1

Prometheus Postgres Exporter

0.12.0

Prometheus Relay

0.4

sf-notifier

0.3

sf-reporter

0.1

Spilo

13-2.1p9

Telegraf

1.9.1

1.26.2

Telemeter

4.4

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 14.0.0.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.24.0-10.tgz

Docker images

ceph

mirantis.azurecr.io/mirantis/ceph:v17.2.6-rel-5

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.24.0-9

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.8.0-rel-3

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.8.0-rel-1

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.5.0-rel-1

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-rel-1

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.3.0-rel-1

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.8.0-rel-1

rook

mirantis.azurecr.io/ceph/rook:v1.11.4-10


LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.22.0-49-g9618f2a/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.37.15

Helm charts

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.37.15.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.37.15.tgz

Docker images

helm-controller

mirantis.azurecr.io/core/lcm-controller:1.37.15


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-7.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-49.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-175.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-225.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.17.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-58.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-47.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-52.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-240.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-11.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-4.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight

https://binary.mirantis.com/stacklight/helm/stacklight-0.12.6.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-37.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-37.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web

mirantis.azurecr.io/stacklight/alerta-web:9-20230602023009

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.25.0

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230601043943

alpine-utils

mirantis.azurecr.io/stacklight/alpine-utils:1-20230602023019

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.24.0

cadvisor

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230602023019

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

curl-jq

mirantis.azurecr.io/scale/curl-jq:alpine-20230120171102

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch-exporter:v1.5.0

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.15-20230602023011

grafana

mirantis.azurecr.io/stacklight/grafana:9.4.9

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230418140825

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230602023018

keycloak-gatekeeper

mirantis.azurecr.io/iam/keycloak-gatekeeper:7.1.3-5

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.22-20230602023016

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230602111822

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230602023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.6.0

opensearch

mirantis.azurecr.io/stacklight/opensearch:2-20230602023014

opensearch-dashboards

mirantis.azurecr.io/stacklight/opensearch-dashboards:2-20230602023014

openstack-refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.0.1.dev33

pgbouncer

mirantis.azurecr.io/stacklight/pgbouncer:1-20230602023019

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.44.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230602023016

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230602023018

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230602023016

psql-client

mirantis.azurecr.io/scale/psql-client:v13-20230124173121

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20230602023012

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230601044047

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230602023015

stacklight-toolkit

mirantis.azurecr.io/stacklight/stacklight-toolkit:20230602123559

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230602023009

mirantis.azurecr.io/stacklight/telegraf:1.26-20230602023017

telemeter

mirantis.azurecr.io/stacklight/telemeter:4.4-20230602023011

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230602023012


1

Only for bare metal clusters

2

Only for existing bare metal clusters

12.x series

This section outlines release notes for the unsupported Cluster releases of the 12.x series. Cluster releases ending with a zero, for example, 12.x.0, are major releases. Cluster releases ending with a non-zero number, for example, 12.x.1, are patch releases of a major release 12.x.0.

12.7.x series

This section outlines release notes for unsupported Cluster releases of the 12.7.x series.

12.7.4

This section includes release notes for the patch Cluster release 12.7.4 that is introduced in the Container Cloud patch release 2.23.5 and is based on the Cluster release 12.7.0. This patch Cluster release supports MOSK 23.1.4.

This section lists the components artifacts of the Cluster release 12.7.4.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.23.5-1.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v16.2.11-cve-4

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.23.5-0

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.7.2-cve-4

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.2.0-cve-2

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.4.0-cve-2

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.7.0-cve-2

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.7.0-cve-2

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-cve-2

rook

mirantis.azurecr.io/ceph/rook:v1.10.10-10

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.21.0-39-g5b167de/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.36.27

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.36.27.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.36.27.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/helm-controller:1.36.27

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-49.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-170.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-200.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.16.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-52.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-44.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-48.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-229.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-4.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.11.9.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20230523144245

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.25.0

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230519023013

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20230519023021

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.23.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230519023020

cerebro

mirantis.azurecr.io/stacklight/cerebro:v0.9-20230505023015

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch_exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20230519023011

grafana

mirantis.azurecr.io/stacklight/grafana:9.4.9

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230418140825

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230519023019

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20230519023019

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230330133800

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230519023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.5.0

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:1-20230523124159

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20230519023015

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20230519023020

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.40.7

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230519023018

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230519023019

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230519023015

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20230523144230

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230403174259

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230519023017

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20230519023016

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230519023009 Updated

mirantis.azurecr.io/stacklight/telegraf:1.26-20230523091335 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20230519023012

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230519023015

yq

mirantis.azurecr.io/stacklight/yq:4.33.2

System and MCR artifacts

Unchanged as compared to 12.7.0

12.7.3

This section includes release notes for the patch Cluster release 12.7.3 that is introduced in the Container Cloud patch release 2.23.4 and is based on the Cluster release 12.7.0. This patch Cluster release supports MOSK 23.1.3.

This section lists the components artifacts of the Cluster release 12.7.3.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.23.4-4.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v16.2.11-cve-4

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.23.4-0

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.7.2-cve-4

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.2.0-cve-2

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.4.0-cve-2

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.7.0-cve-2

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.7.0-cve-2

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-cve-2

rook

mirantis.azurecr.io/ceph/rook:v1.10.10-10

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.21.0-39-g5b167de/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.36.26

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.36.26.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.36.26.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/helm-controller:1.36.26

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-49.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-170.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-200.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.16.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-52.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-44.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-48.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-229.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-4.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.11.7.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20230505023008

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.25.0

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230505023012

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20230505023019

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.23.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230505023018

cerebro Updated

mirantis.azurecr.io/stacklight/cerebro:v0.9-20230505023015

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch_exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20230428063240

grafana Updated

mirantis.azurecr.io/stacklight/grafana:9.4.9

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230418140825

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230505023018

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20230505023017

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230330133800

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230505023009

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.5.0

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:1-20230505023014

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20230505023013

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20230505023019

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.40.7

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230505023016

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230505023017

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230505023012

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20230505023013

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230403174259

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230404125347

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20230505023015

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230505023009 Updated

mirantis.azurecr.io/stacklight/telegraf:1.26.1-20230505023017 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20230505023010

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230505023010

yq

mirantis.azurecr.io/stacklight/yq:4.33.2

System and MCR artifacts

Unchanged as compared to 12.7.0

12.7.2

This section includes release notes for the patch Cluster release 12.7.2 that is introduced in the Container Cloud patch release 2.23.3 and is based on the Cluster release 12.7.0. This patch Cluster release supports MOSK 23.1.2.

  • For details on MOSK 23.1.2, see MOSK documentation: Release Notes

  • For CVE fixes delivered with this patch Cluster release, see security notes for 2.23.3

  • For CVE fixes delivered with the previous patch Cluster release, see security notes for 2.23.2

  • For details on patch release delivery, see Patch releases

This section lists the components artifacts of the Cluster release 12.7.2.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.23.3-2.tgz

Docker images Updated

ceph

mirantis.azurecr.io/ceph/ceph:v16.2.11-cve-4

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.23.3-0

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.7.2-cve-3

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.2.0-cve-2

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.4.0-cve-2

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.7.0-cve-2

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.7.0-cve-2

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-cve-2

rook

mirantis.azurecr.io/ceph/rook:v1.10.10-10

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.21.0-39-g5b167de/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.36.23

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.36.23.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.36.23.tgz

Docker images

helm-controller Updated

mirantis.azurecr.io/core/helm-controller:1.36.23

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-49.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-170.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-200.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.16.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-52.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-44.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-48.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-229.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-4.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.11.6.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20230414023009

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.25.0

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230414023012

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20230414023019

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.23.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230414023019

cerebro

mirantis.azurecr.io/stacklight/cerebro:v0.9-20230316081755

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch_exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20230414023011

grafana

mirantis.azurecr.io/stacklight/grafana:9.4.7

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230418140825

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230414023019

kube-state-metrics Updated

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20230414023019

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230330133800

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230417102535

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.5.0

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:1-20230414023016

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20230414023010

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20230414023019

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.40.7

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230414023017

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230414023019

prometheus-postgres-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230414023019

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20230414023014

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230403174259

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230404125347

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20230414023017

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230414023010 Updated

mirantis.azurecr.io/stacklight/telegraf:1.26.1-20230414023019 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20230414023013

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230414023013

yq Updated

mirantis.azurecr.io/stacklight/yq:4.33.2

System and MCR artifacts

Unchanged as compared to 12.7.0

12.7.1

This section outlines release notes for the patch Cluster release 12.7.1 that is introduced in the Container Cloud patch release 2.23.2 and is based on the Cluster release 12.7.0. This patch Cluster release supports MOSK 23.1.1.

This section lists the components artifacts of the Cluster release 12.7.1. For artifacts of the Container Cloud release, see Container Cloud release 2.23.2.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.23.2-7.tgz

Docker images Updated

ceph

mirantis.azurecr.io/ceph/ceph:v16.2.11-cve-2

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.23.2-6

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.7.2-cve-1

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.2.0-cve-1

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.4.0-cve-1

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.7.0-cve-1

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.7.0-cve-1

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-cve-1

rook

mirantis.azurecr.io/ceph/rook:v1.10.10-9

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.21.0-39-g5b167de/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.36.14

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.36.14.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.36.14.tgz

Docker images

helm-controller Updated

mirantis.azurecr.io/core/helm-controller:1.36.14

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-49.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-170.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-194.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.16.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-52.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-44.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-48.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-229.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-4.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.11.5.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:8-20230331023009

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.25.0

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230331023013

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20230331023020

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.23.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230331023021

cerebro Updated

mirantis.azurecr.io/stacklight/cerebro:v0.9-20230316081755

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator Updated

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch_exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20230331023012

grafana Updated

mirantis.azurecr.io/stacklight/grafana:9.4.7

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230310145607

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230331023020

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.7.0

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20230331023019

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230330133800

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230331123540

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.5.0

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:1-20230403060750

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20230403060759

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20230331023020

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.40.7

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230331023015

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.1

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230331023020

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230331023018

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20230331023014

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230403174259

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230404125347

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20230331023016

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230331023009 Updated

mirantis.azurecr.io/stacklight/telegraf:1.23.4-20230317023017 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20230331023013

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230331023015

yq Updated

mirantis.azurecr.io/stacklight/yq:4.32.2

System and MCR artifacts

Unchanged as compared to 12.7.0

12.7.0

This section outlines release notes for the Cluster release 12.7.0 that is introduced in the Container Cloud release 2.23.1. This Cluster release is based on the Cluster release 11.7.0.

The Cluster release 12.7.0 supports:

For the list of known and resolved issues, refer to the Container Cloud release 2.23.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 12.7.0 that is introduced in the Container Cloud release 2.23.1.

MKE patch release update

Updated the Mirantis Kubernetes Engine (MKE) patch release from 3.5.5 to 3.5.7. The MKE update occurs automatically when you update your managed cluster.

Automatic upgrade of Ceph from Octopus to Pacific

Upgraded Ceph major version from Octopus 15.2.17 to Pacific 16.2.11 with an automatic upgrade of Ceph components on existing managed clusters during the Cluster version update.

Caution

Since Ceph Pacific, while mounting an RBD or CephFS volume, CSI drivers do not propagate the 777 permission on the mount path.

Two Ceph Managers by default for HA

Increased the default number of Ceph Managers deployed on a Ceph cluster to two, active and stand-by, to improve fault tolerance and HA.

On existing clusters, the second Ceph Manager deploys automatically after a managed cluster update.

Note

Mirantis recommends labeling at least three Ceph nodes with the mgr role, which equals the default number of Ceph nodes with the mon role. In this configuration, one backup Ceph node remains available to redeploy a failed Ceph Manager in case of a server outage.

Bond interfaces monitoring

Implemented monitoring of bond interfaces for clusters based on bare metal. The number of active and configured slaves per bond is now monitored, with the following alerts raised in case of issues (see the verification sketch after this list):

  • BondInterfaceDown

  • BondInterfaceSlaveDown

  • BondInterfaceOneSlaveLeft

  • BondInterfaceOneSlaveConfigured
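
For reference, these alerts build on bond metrics that node-exporter exposes per bond interface. The following minimal sketch, assuming a reachable Prometheus endpoint (the URL is a placeholder) and the standard node_bonding_active and node_bonding_slaves metrics, lists bonds that currently have fewer active slaves than configured, which roughly corresponds to the BondInterfaceSlaveDown condition:

    # Minimal sketch: list bonds with fewer active slaves than configured.
    # Assumptions: the Prometheus URL is a placeholder and the node-exporter
    # bonding metrics (node_bonding_active, node_bonding_slaves) are collected.
    import requests

    PROMETHEUS_URL = "http://prometheus.example.local:9090"  # placeholder endpoint

    def instant_query(promql: str):
        """Run an instant PromQL query and return the result vector."""
        resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": promql})
        resp.raise_for_status()
        return resp.json()["data"]["result"]

    # Bonds where active slaves < configured slaves (degraded but not necessarily down).
    for sample in instant_query("node_bonding_active < node_bonding_slaves"):
        labels = sample["metric"]
        host = labels.get("node", labels.get("instance", "unknown"))
        print(f"{host}: bond {labels.get('master')} has {sample['value'][1]} active slaves")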

Calculation of storage retention time using OpenSearch and Prometheus panels

Implemented the following panels in the Grafana dashboards for OpenSearch and Prometheus that provide details on storage usage and allow estimating the possible retention time based on the provisioned storage and average usage (see the calculation sketch after this list):

  • OpenSearch dashboard:

    • Cluster > Estimated Retention

    • Resources > Disk

    • Resources > File System Used Space by Percentage

    • Resources > Stored Indices Disk Usage

    • Resources > Age of Logs

  • Prometheus dashboard:

    • Cluster > Estimated Retention

    • Resources > Storage

    • Resources > Storage by Percentage
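
For reference, the retention estimate on these panels boils down to dividing the provisioned storage by the average usage per unit of time. A minimal illustration with placeholder numbers, not tied to the actual panel queries:

    # Simplified retention estimate: provisioned storage divided by average daily growth.
    # The numbers below are placeholders, not values from a real cluster.

    def estimated_retention_days(provisioned_gb: float, average_daily_usage_gb: float) -> float:
        """Return how many days of data the provisioned storage can hold on average."""
        if average_daily_usage_gb <= 0:
            raise ValueError("average daily usage must be positive")
        return provisioned_gb / average_daily_usage_gb

    # Example: 500 GB provisioned for OpenSearch, logs grow by about 20 GB per day.
    print(estimated_retention_days(provisioned_gb=500, average_daily_usage_gb=20))  # 25.0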

HA setup for ‘iam-proxy’ in StackLight

Implemented deployment of two iam-proxy instances for the StackLight HA setup that ensures access to HA components if one iam-proxy instance fails. The second iam-proxy instance is automatically deployed during cluster update on existing StackLight HA deployments.

Log forwarding to third-party systems using Fluentd plugins

Added the capability to forward logs to external Elasticsearch and OpenSearch servers as the fluentd-logs output. This enhancement also expands existing configuration options for log forwarding to syslog.

Introduced logging.externalOutputs that deprecates logging.syslog and enables you to configure any number of outputs with more configuration flexibility.
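
The following hedged sketch only illustrates the shape such an override might take in the StackLight Helm values; the output name and the plugin, host, and port keys are illustrative assumptions, not the documented logging.externalOutputs schema:

    # Hedged sketch of a StackLight values override with one external log output.
    # The keys inside the output entry are assumptions for illustration only;
    # refer to the StackLight configuration documentation for the actual schema.
    import yaml  # PyYAML

    stacklight_values = {
        "logging": {
            "externalOutputs": {
                "remote-opensearch": {           # illustrative output name
                    "plugin": "opensearch",      # assumed Fluentd output plugin name
                    "host": "logs.example.com",  # placeholder external server
                    "port": 9200,                # placeholder port
                },
            },
        },
    }

    print(yaml.safe_dump(stacklight_values, sort_keys=False))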

‘MCC Applications Performance’ Grafana dashboard for StackLight

Implemented the MCC Applications Performance Grafana dashboard that provides information on the internal operation of Container Cloud based on Golang, controller runtime, and custom metrics. You can use it to verify application performance and for troubleshooting purposes.

Components versions

The following table lists the components versions of the Cluster release 12.7.0. For major components and versions of the Container Cloud release, see Container Cloud release 2.23.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.5.7 0

Container runtime

Mirantis Container Runtime

20.10.13 1

Distributed storage Updated

Ceph

16.2.11 (Pacific)

Rook

1.0.0-20230120144247

LCM

Helm

2.16.11-40

helm-controller Updated

1.36.3

lcm-ansible Updated

0.21.0-39-g5b167de

lcm-agent Updated

1.36.3

StackLight

Alerta Updated

8.5.0

Alertmanager Updated

0.25.0

Alertmanager Webhook ServiceNow Updated

0.1

Blackbox Exporter Updated

0.23.0

cAdvisor New

0.46.0

Cerebro Updated

0.9.4

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd Updated

1.15.3

Grafana Updated

9.1.8

Grafana Image Renderer Updated

3.6.1

kube-state-metrics New

2.7.0

IAM Proxy

6.0.1

Metric Collector Updated

0.1

Metricbeat Updated

7.10.2

Node Exporter Updated

1.5.0

OpenSearch Updated

1.3.7

OpenSearch Dashboards Updated

1.3.7

Prometheus Updated

2.40.7

Prometheus ES Exporter Updated

0.14.0

Prometheus MS Teams

1.4.2

Prometheus Patroni Exporter Updated

0.0.1

Prometheus Postgres Exporter

0.9.0

Prometheus Relay Updated

0.4

sf-notifier Updated

0.3

sf-reporter Updated

0.1

Spilo Updated

13-2.1p9

Telegraf

1.9.1 Updated

1.23.4 Updated

Telemeter Updated

4.4

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 12.7.0. For artifacts of the Container Cloud release, see Container Cloud release 2.23.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.23-12.tgz

Docker images Updated

ceph

mirantis.azurecr.io/ceph/ceph:v16.2.11

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.23-11

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.7.2

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.5.1

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v3.3.0

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v6.1.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v4.0.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.6.0

rook

mirantis.azurecr.io/ceph/rook:v1.0.0-20230120144247


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.21.0-39-g5b167de/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.36.3

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.36.3.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.36.3.tgz

Docker images

helm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.36.3


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta Updated

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor New

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-49.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-170.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-194.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.14.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat Updated

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-52.tgz

opensearch-dashboards Updated

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-44.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-48.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-229.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-4.tgz

sf-reporter Updated

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.11.3.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20230206172055

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0.25.0

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230206145038

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20230203125601

blackbox-exporter Updated

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.23.0

cadvisor New

mirantis.azurecr.io/stacklight/cadvisor:v0.46.0

cerebro Updated

mirantis.azurecr.io/stacklight/cerebro:v0.9-20230203125548

configmap-reload Updated

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator Updated

mirantis.azurecr.io/stacklight/curator:5.7.6-20230206171950

elasticsearch_exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20230203125530

grafana

mirantis.azurecr.io/stacklight/grafana:9.1.8

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.6.1-20221103105602

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.21.0-20230206130934

kube-state-metrics Updated

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.7.0

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.22.13

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20221227141656

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20230203125534

node-exporter Updated

mirantis.azurecr.io/stacklight/node-exporter:v1.5.0

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:1-20230203125541

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20230203125528

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20230203125558

prometheus Updated

mirantis.azurecr.io/stacklight/prometheus:v2.40.7

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230206130434

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230203125555

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230203125553

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20230206130301

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230206133637

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230203124803

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20230203125546

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230203125527 Updated

mirantis.azurecr.io/stacklight/telegraf:1.23.4-20220915114529

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20230203125536

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230203125540

yq Updated

mirantis.azurecr.io/stacklight/yq:4.30.6


See also

Patch releases

12.5.0

This section outlines release notes for the Cluster release 12.5.0 that is introduced in the Container Cloud release 2.21.1. This Cluster release is based on the Cluster release 11.5.0.

The Cluster release 12.5.0 supports:

For the list of known and resolved issues, refer to the Container Cloud release 2.21.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 12.5.0 that is introduced in the Container Cloud release 2.21.1.

Support for MKE 3.5.5 and MCR 20.10.13

Added support for the Mirantis Kubernetes Engine (MKE) 3.5.5 with Kubernetes 1.21 and the Mirantis Container Runtime (MCR) version 20.10.13.

An update from the Cluster release 8.10.0 to 12.5.0 becomes available through the Container Cloud web UI menu once the related management or regional cluster automatically upgrades to Container Cloud 2.21.1.

MetalLB minor version update

Updated the MetalLB version from 0.12.1 to 0.13.4 to apply the latest enhancements. The MetalLB configuration is now stored in dedicated MetalLB objects instead of the ConfigMap object.
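
For reference, MetalLB 0.13.x describes address pools and announcements with dedicated objects such as IPAddressPool and L2Advertisement. The sketch below only illustrates these object kinds; the pool name, namespace, and address range are placeholders, and in Container Cloud such objects are typically managed through the cluster configuration rather than created manually:

    # Sketch of the dedicated MetalLB objects that replace the former ConfigMap.
    # Pool name, namespace, and address range are placeholders.
    import yaml  # PyYAML

    ip_address_pool = {
        "apiVersion": "metallb.io/v1beta1",
        "kind": "IPAddressPool",
        "metadata": {"name": "services-pool", "namespace": "metallb-system"},
        "spec": {"addresses": ["10.0.0.100-10.0.0.120"]},  # placeholder range
    }

    l2_advertisement = {
        "apiVersion": "metallb.io/v1beta1",
        "kind": "L2Advertisement",
        "metadata": {"name": "services-l2", "namespace": "metallb-system"},
        "spec": {"ipAddressPools": ["services-pool"]},
    }

    print(yaml.safe_dump_all([ip_address_pool, l2_advertisement], sort_keys=False))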

Enhanced etcd monitoring

Improved etcd monitoring by implementing the Etcd dashboard as well as the etcdDbSizeCritical and etcdDbSizeMajor alerts that report the size of the etcd database.
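
The condition behind these alerts is essentially the etcd database size approaching its backend quota. A simplified illustration; the numbers are placeholders and do not reflect the thresholds that StackLight uses:

    # Rough illustration of the etcd database size check behind the new alerts.
    # The values are placeholders; 2 GiB is the default etcd backend quota.

    def etcd_db_usage_ratio(db_size_bytes: float, quota_bytes: float) -> float:
        """Fraction of the etcd backend quota currently used by the database."""
        return db_size_bytes / quota_bytes

    # Example: 1.6 GiB database with a 2 GiB quota -> 0.8 (80 % used).
    print(etcd_db_usage_ratio(db_size_bytes=1.6 * 2**30, quota_bytes=2 * 2**30))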

Components versions

The following table lists the components versions of the Cluster release 12.5.0. For major components and versions of the Container Cloud release, see Container Cloud release 2.21.0.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.5.5 0

Container runtime

Mirantis Container Runtime

20.10.13 1

Distributed storage

Ceph

15.2.17 (Octopus)

Rook

1.0.0-20220809220209

LCM

Helm

2.16.11-40

helm-controller

0.3.0-327-gbc30b11b

lcm-ansible

0.19.0-12-g6cad672

lcm-agent

0.3.0-327-gbc30b11b

StackLight

Alerta

8.5.0-20220923121625

Alertmanager

0.23.0

Alertmanager Webhook ServiceNow

0.1-20220706035316

Cerebro

0.9-20220923122026

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.14-20220922214003

Grafana

9.0.2

Grafana Image Renderer

3.5.0

IAM Proxy

6.0.1

Metric Collector

0.1-20220711134630

Metricbeat

7.10.2-20220909091002

OpenSearch

1-20220517112057

OpenSearch Dashboards

1-20220517112107

Prometheus

2.35.0

Prometheus Blackbox Exporter

0.19.0

Prometheus ES Exporter

0.14.0-20220517111946

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.2.2

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20220624102731

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

sf-notifier

0.3-20220706035002

sf-reporter

0.1-20220916113234

Spilo

13-2.1p1-20220921105803

Telegraf

1.9.1-20221107155248

1.23.4-20220915114529

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 12.5.0. For artifacts of the Container Cloud release, see Container Cloud release 2.21.0.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-964.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.17

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20221024145202

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook:v1.0.0-20220809220209


LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.19.0-12-g6cad672/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-327-gbc30b11b/lcm-agent

Helm charts

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.34.16.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.34.16.tgz

Docker images

helm-controller

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-327-gbc30b11b


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-25.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-37.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-142.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-173.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.13.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-52.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-40.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-42.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-229.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-4.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-5.tgz

stacklight

https://binary.mirantis.com/stacklight/helm/stacklight-0.9.3.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20220923121625

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20220706035316

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.19.0

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:v0.9-20220923122026

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch_exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.14-20220922214003

grafana

mirantis.azurecr.io/stacklight/grafana:9.0.2

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.5.0

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.15.9

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.22.13

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20220711134630

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20220909091002

nginx-prometheus-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

opensearch

mirantis.azurecr.io/stacklight/opensearch:1-20220517112057

opensearch-dashboards

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20220517112107

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.35.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20220517111946

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20220624102731

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20220706035002

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20220916113234

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p1-20220921105803

stacklight-toolkit

mirantis.azurecr.io/stacklight/stacklight-toolkit:20220729121446

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20221107155248

mirantis.azurecr.io/stacklight/telegraf:1.23.4-20220915114529

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:4.25.2


0

Only for existing bare metal clusters

11.x series

This section outlines release notes for the unsupported Cluster releases of the 11.x series. Cluster releases ending with a zero, for example, 11.x.0, are major releases. Cluster releases ending with a non-zero number, for example, 11.x.1, are patch releases of the major release 11.x.0.

11.7.x series

This section outlines release notes for unsupported Cluster releases of the 11.7.x series.

11.7.4

This section includes release notes for the patch Cluster release 11.7.4 that is introduced in the Container Cloud patch release 2.23.5 and is based on the Cluster release 11.7.0.

  • For CVE fixes delivered with this patch Cluster release, see security notes for 2.23.5

  • For CVE fixes delivered with the previous patch Cluster releases, see security notes for 2.23.4, 2.23.3, and 2.23.2

  • For details on patch release delivery, see Patch releases

This section lists the components artifacts of the Cluster release 11.7.4.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.23.5-1.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v16.2.11-cve-4

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.23.5-0

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.7.2-cve-4

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.2.0-cve-2

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.4.0-cve-2

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.7.0-cve-2

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.7.0-cve-2

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-cve-2

rook

mirantis.azurecr.io/ceph/rook:v1.10.10-10

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.21.0-39-g5b167de/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.36.27

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.36.27.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.36.27.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/helm-controller:1.36.27

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-49.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-170.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-200.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.16.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-52.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-44.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-48.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-229.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-9.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-4.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.11.9.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20230523144245

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.25.0

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230519023013

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20230519023021

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.23.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230519023020

cerebro

mirantis.azurecr.io/stacklight/cerebro:v0.9-20230505023015

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch_exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20230519023011

grafana

mirantis.azurecr.io/stacklight/grafana:9.4.9

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230418140825

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230519023019

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20230519023019

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230330133800

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230519023010

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.5.0

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:1-20230523124159

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20230519023015

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20230519023020

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.40.7

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230519023018

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230519023019

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230519023015

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20230523144230

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230403174259

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230519023017

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20230519023016

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230519023009 Updated

mirantis.azurecr.io/stacklight/telegraf:1.26-20230523091335 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20230519023012

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230519023015

yq

mirantis.azurecr.io/stacklight/yq:4.33.2

System and MCR artifacts

Unchanged as compared to 11.7.0

1

Only for bare metal

2

Only for existing bare metal clusters

11.7.3

This section includes release notes for the patch Cluster release 11.7.3 that is introduced in the Container Cloud patch release 2.23.4 and is based on the Cluster release 11.7.0.

  • For CVE fixes delivered with this patch Cluster release, see security notes for 2.23.4

  • For CVE fixes delivered with the previous patch Cluster releases, see security notes for 2.23.3 and 2.23.2

  • For details on patch release delivery, see Patch releases

This section lists the components artifacts of the Cluster release 11.7.3.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.23.4-4.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v16.2.11-cve-4

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.23.4-0

cephcsi Updated

mirantis.azurecr.io/mirantis/cephcsi:v3.7.2-cve-4

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.2.0-cve-2

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.4.0-cve-2

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.7.0-cve-2

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.7.0-cve-2

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-cve-2

rook

mirantis.azurecr.io/ceph/rook:v1.10.10-10

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.21.0-39-g5b167de/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.36.26

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.36.26.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.36.26.tgz

Docker images Updated

helm-controller

mirantis.azurecr.io/core/helm-controller:1.36.26

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-49.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-170.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-200.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.16.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-52.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-44.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-48.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-229.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-9.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-4.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.11.7.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20230505023008

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.25.0

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230505023012

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20230505023019

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.23.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230505023018

cerebro Updated

mirantis.azurecr.io/stacklight/cerebro:v0.9-20230505023015

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch_exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20230428063240

grafana Updated

mirantis.azurecr.io/stacklight/grafana:9.4.9

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230418140825

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230505023018

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20230505023017

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230330133800

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230505023009

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.5.0

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:1-20230505023014

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20230505023013

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20230505023019

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.40.7

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230505023016

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230505023017

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230505023012

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20230505023013

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230403174259

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230404125347

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20230505023015

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230505023009 Updated

mirantis.azurecr.io/stacklight/telegraf:1.26.1-20230505023017 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20230505023010

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230505023010

yq

mirantis.azurecr.io/stacklight/yq:4.33.2

System and MCR artifacts

Unchanged as compared to 11.7.0

1

Only for bare metal

2

Only for existing bare metal clusters

11.7.2

This section includes release notes for the patch Cluster release 11.7.2 that is introduced in the Container Cloud patch release 2.23.3 and is based on the Cluster release 11.7.0.

  • For CVE fixes delivered with this patch Cluster release, see security notes for 2.23.3

  • For CVE fixes delivered with the previous patch Cluster release, see security notes for 2.23.2

  • For details on patch release delivery, see Patch releases

This section lists the components artifacts of the Cluster release 11.7.2.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.23.3-2.tgz

Docker images Updated

ceph

mirantis.azurecr.io/ceph/ceph:v16.2.11-cve-4

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.23.3-0

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.7.2-cve-3

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.2.0-cve-2

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.4.0-cve-2

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.7.0-cve-2

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.7.0-cve-2

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-cve-2

rook

mirantis.azurecr.io/ceph/rook:v1.10.10-10

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.21.0-39-g5b167de/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.36.23

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.36.14.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.36.14.tgz

Docker images

helm-controller Updated

mirantis.azurecr.io/core/helm-controller:1.36.23

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-49.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-170.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-200.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.16.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-52.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-44.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-48.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-229.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-9.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-4.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.11.6.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:9-20230414023009

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.25.0

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230414023012

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20230414023019

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.23.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230414023019

cerebro

mirantis.azurecr.io/stacklight/cerebro:v0.9-20230316081755

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch_exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20230414023011

grafana

mirantis.azurecr.io/stacklight/grafana:9.4.7

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230418140825

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230414023019

kube-state-metrics Updated

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.8.2

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20230414023019

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230330133800

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230417102535

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.5.0

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:1-20230414023016

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20230414023010

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20230414023019

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.40.7

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230414023017

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230414023019

prometheus-postgres-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.12.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230414023019

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20230414023014

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230403174259

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230404125347

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20230414023017

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230414023010 Updated

mirantis.azurecr.io/stacklight/telegraf:1.26.1-20230414023019 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20230414023013

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230414023013

yq Updated

mirantis.azurecr.io/stacklight/yq:4.33.2

System and MCR artifacts

Unchanged as compared to 11.7.0

1

Only for bare metal

2

Only for existing bare metal clusters

11.7.1

This section outlines release notes for the patch Cluster release 11.7.1 that is introduced in the Container Cloud patch release 2.23.2 and is based on the Cluster release 11.7.0. For the list of CVE fixes delivered with this patch Cluster release, see 2.23.2. For details on patch release delivery, see Patch releases.

This section lists the components artifacts of the Cluster release 11.7.1. For artifacts of the Container Cloud release, see Container Cloud release 2.23.2.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.23.2-7.tgz

Docker images Updated

ceph

mirantis.azurecr.io/ceph/ceph:v16.2.11-cve-2

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.23.2-6

cephcsi

mirantis.azurecr.io/mirantis/cephcsi:v3.7.2-cve-1

cephcsi-attacher

mirantis.azurecr.io/mirantis/cephcsi-attacher:v4.2.0-cve-1

cephcsi-provisioner

mirantis.azurecr.io/mirantis/cephcsi-provisioner:v3.4.0-cve-1

cephcsi-registrar

mirantis.azurecr.io/mirantis/cephcsi-registrar:v2.7.0-cve-1

cephcsi-resizer

mirantis.azurecr.io/mirantis/cephcsi-resizer:v1.7.0-cve-1

cephcsi-snapshotter

mirantis.azurecr.io/mirantis/cephcsi-snapshotter:v6.2.1-cve-1

rook

mirantis.azurecr.io/ceph/rook:v1.10.10-9

LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.21.0-39-g5b167de/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/core/bin/lcm-agent-1.36.14

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.36.14.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.36.14.tgz

Docker images

helm-controller Updated

mirantis.azurecr.io/core/helm-controller:1.36.14

StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-49.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-170.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-194.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.16.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-52.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-44.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-48.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-229.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

refapp

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-9.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-4.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.11.5.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:8-20230331023009

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.25.0

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230331023013

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20230331023020

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.23.0

cadvisor Updated

mirantis.azurecr.io/stacklight/cadvisor:v0.47-20230331023021

cerebro Updated

mirantis.azurecr.io/stacklight/cerebro:v0.9-20230316081755

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator Updated

mirantis.azurecr.io/stacklight/curator:5.7.6-20230404082402

elasticsearch_exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20230331023012

grafana Updated

mirantis.azurecr.io/stacklight/grafana:9.4.7

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3-20230310145607

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.22-20230331023020

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.7.0

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22-20230331023019

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20230330133800

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.12.1-20230331123540

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.5.0

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:1-20230403060750

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20230403060759

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20230331023020

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.40.7

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230331023015

prometheus-msteams Updated

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.5.1

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230331023020

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230331023018

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20230331023014

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230403174259

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230404125347

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20230331023016

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230331023009 Updated

mirantis.azurecr.io/stacklight/telegraf:1.23.4-20230317023017 Updated

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20230331023013

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230331023015

yq Updated

mirantis.azurecr.io/stacklight/yq:4.32.2

System and MCR artifacts

Unchanged as compared to 11.7.0

1

Only for bare metal

2

Only for existing bare metal clusters

11.7.0

This section outlines release notes for the Cluster release 11.7.0 that is introduced in the Mirantis Container Cloud release 2.23.0.

This Cluster release supports Mirantis Kubernetes Engine 3.5.7 with Kubernetes 1.21 and Mirantis Container Runtime 20.10.13.

Enhancements

This section outlines new features implemented in the Cluster release 11.7.0 that is introduced in the Container Cloud release 2.23.0.

MKE patch release update

Updated the Mirantis Kubernetes Engine (MKE) version from 3.5.5 to 3.5.7 for the Container Cloud management, regional, and managed clusters on all supported cloud providers, as well as for the attachment of MKE clusters that are not based on Container Cloud.

Note

For MOSK-based deployments, the feature support is available since MOSK 23.1.

Automatic upgrade of Ceph from Octopus to Pacific

Upgraded Ceph major version from Octopus 15.2.17 to Pacific 16.2.11 with an automatic upgrade of Ceph components on existing managed clusters during the Cluster version update.

Caution

Since Ceph Pacific, while mounting an RBD or CephFS volume, CSI drivers do not propagate the 777 permission on the mount path.

Note

For MOSK-based deployments, the feature support is available since MOSK 23.1.

HA setup for ‘iam-proxy’ in StackLight

Implemented deployment of two iam-proxy instances for the StackLight HA setup that ensures access to HA components if one iam-proxy instance fails. The second iam-proxy instance is automatically deployed during cluster update on existing StackLight HA deployments.

Note

For MOSK-based deployments, the feature support is available since MOSK 23.1.

Log forwarding to third-party systems using Fluentd plugins

Added the capability to forward logs to external Elasticsearch and OpenSearch servers as the fluentd-logs output. This enhancement also expands existing configuration options for log forwarding to syslog.

Introduced logging.externalOutputs that deprecates logging.syslog and enables you to configure any number of outputs with more configuration flexibility.
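The following snippet is a minimal illustrative sketch of a single external output defined through logging.externalOutputs; the output name and the nested keys (type, host, port) are assumptions made for illustration only, so refer to the StackLight configuration documentation for the exact schema:

  logging:
    externalOutputs:
      remote-opensearch:         # arbitrary output name, illustrative
        type: opensearch         # assumed key: target system type
        host: logs.example.com   # assumed key: remote server address
        port: 9200               # assumed key: remote server port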

Note

For MOSK-based deployments, the feature support is available since MOSK 23.1.

‘MCC Applications Performance’ Grafana dashboard for StackLight

Implemented the MCC Applications Performance Grafana dashboard that provides information on the performance of Container Cloud internals based on Golang, controller runtime, and custom metrics. You can use it to verify the performance of applications and for troubleshooting purposes.

Note

For MOSK-based deployments, the feature support is available since MOSK 23.1.

PVC configuration for Reference Application

Implemented the following options that enable configuration of persistent volumes for Reference Application, as shown in the example after this list:

  • refapp.workload.persistentVolumeEnabled

  • refapp.workload.persistentVolumeSize
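The following snippet is a minimal sketch of these options in the StackLight Helm values, assuming they are nested under refapp.workload exactly as named above; the size value and the surrounding structure are illustrative only:

  refapp:
    workload:
      persistentVolumeEnabled: true   # enabled by default, recommended for production
      persistentVolumeSize: 10Gi      # illustrative size value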

Note

The refapp.workload.persistentVolumeEnabled option is enabled by default and is recommended for production clusters.

Caution

For MKE clusters that are part of MOSK infrastructure, the feature is not supported yet.

Components versions

The following table lists the components versions of the Cluster release 11.7.0. For major components and versions of the Container Cloud release, see Container Cloud release 2.23.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.5.7 0

Container runtime

Mirantis Container Runtime

20.10.13 1

Distributed storage Updated

Ceph

16.2.11 (Pacific)

Rook

1.0.0-20230120144247

LCM

Helm

2.16.11-40

helm-controller Updated

1.36.3

lcm-ansible Updated

0.21.0-39-g5b167de

lcm-agent Updated

1.36.3

StackLight

Alerta Updated

8.5.0

Alertmanager Updated

0.25.0

Alertmanager Webhook ServiceNow Updated

0.1

Blackbox Exporter Updated

0.23.0

cAdvisor

0.46.0

Cerebro

0.9.4

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.15.3

Grafana

9.1.8

Grafana Image Renderer Updated

3.6.1

kube-state-metrics Updated

2.7.0

IAM Proxy

6.0.1

Metric Collector Updated

0.1

Metricbeat

7.10.2

Node Exporter Updated

1.5.0

OpenSearch Updated

1.3.7

OpenSearch Dashboards Updated

1.3.7

Prometheus Updated

2.40.7

Prometheus ES Exporter Updated

0.14.0

Prometheus MS Teams

1.4.2

Prometheus Patroni Exporter

0.0.1

Prometheus Postgres Exporter

0.9.0

Prometheus Relay Updated

0.4

sf-notifier Updated

0.3

sf-reporter Updated

0.1

Spilo Updated

13-2.1p9

Telegraf

1.9.1 Updated

1.23.4 Updated

Telemeter

4.4

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 11.7.0. For artifacts of the Container Cloud release, see Container Cloud release 2.23.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.23-12.tgz

Docker images Updated

ceph

mirantis.azurecr.io/ceph/ceph:v16.2.11

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:2.23-11

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.7.2

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.5.1

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v3.3.0

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v6.1.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v4.0.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.6.0

rook

mirantis.azurecr.io/ceph/rook:v1.0.0-20230120144247


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.21.0-39-g5b167de/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/core/bin/lcm-agent-1.36.3

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.36.3.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.36.3.tgz

Docker images

helm-controller Updated

mirantis.azurecr.io/core/lcm-controller:1.36.3


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta Updated

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-29.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-49.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-170.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-194.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.14.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-52.tgz

opensearch-dashboards Updated

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-44.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-48.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-229.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

refapp Updated

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-9.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-4.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.11.3.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20230206172055

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0.25.0

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20230206145038

alpine-utils Updated

mirantis.azurecr.io/stacklight/alpine-utils:1-20230203125601

blackbox-exporter Updated

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.23.0

cadvisor

mirantis.azurecr.io/stacklight/cadvisor:v0.46.0

cerebro Updated

mirantis.azurecr.io/stacklight/cerebro:v0.9-20230203125548

configmap-reload Updated

mirantis.azurecr.io/stacklight/configmap-reload:v0.8.0

curator Updated

mirantis.azurecr.io/stacklight/curator:5.7.6-20230206171950

elasticsearch_exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20230203125530

grafana

mirantis.azurecr.io/stacklight/grafana:9.1.8

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.6.1-20221103105602

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.21.0-20230206130934

kube-state-metrics Updated

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.7.0

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.22.13

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20221227141656

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20230203125534

node-exporter Updated

mirantis.azurecr.io/stacklight/node-exporter:v1.5.0

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:1-20230203125541

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20230203125528

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20230203125558

prometheus Updated

mirantis.azurecr.io/stacklight/prometheus:v2.40.7

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20230206130434

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20230203125555

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.4-20230203125553

refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.0.1.dev29

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20230206130301

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20230206133637

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p9-20230203124803

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20230203125546

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20230203125527 Updated

mirantis.azurecr.io/stacklight/telegraf:1.23.4-20220915114529

telemeter Updated

mirantis.azurecr.io/stacklight/telemeter:4.4-20230203125536

telemeter-token-auth Updated

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20230203125540

yq Updated

mirantis.azurecr.io/stacklight/yq:4.30.6


1

Only for bare metal

2

Only for existing bare metal clusters

For the list of known and addressed issues, refer to the Container Cloud release 2.23.0 section.

See also

Patch releases

11.6.0

This section outlines release notes for the Cluster release 11.6.0 that is introduced in the Mirantis Container Cloud release 2.22.0.

This Cluster release supports Mirantis Kubernetes Engine 3.5.5 with Kubernetes 1.21 and Mirantis Container Runtime 20.10.13.

Enhancements

This section outlines new features implemented in the Cluster release 11.6.0 that is introduced in the Container Cloud release 2.22.0.

Bond interfaces monitoring

Implemented monitoring of bond interfaces for clusters based on bare metal and Equinix Metal with public or private networking. The number of active and configured slaves per bond is now monitored, with the following alerts raised in case of issues:

  • BondInterfaceDown

  • BondInterfaceSlaveDown

  • BondInterfaceOneSlaveLeft

  • BondInterfaceOneSlaveConfigured

Note

For MOSK-based deployments, the feature support is available since MOSK 23.1.

Calculation of storage retention time using OpenSearch and Prometheus panels

Implemented the following panels in the Grafana dashboards for OpenSearch and Prometheus that provide details on the storage usage and allow calculating the possible retention time based on provisioned storage and average usage:

  • OpenSearch dashboard:

    • Cluster > Estimated Retention

    • Resources > Disk

    • Resources > File System Used Space by Percentage

    • Resources > Stored Indices Disk Usage

    • Resources > Age of Logs

  • Prometheus dashboard:

    • Cluster > Estimated Retention

    • Resources > Storage

    • Resources > Storage by Percentage

Note

For MOSK-based deployments, the feature support is available since MOSK 23.1.

Deployment of cAdvisor as a StackLight component

Added cAdvisor to the StackLight deployment on all types of Container Cloud clusters to allow gathering metrics about container resource usage.

Container Cloud web UI support for Reference Application

Enhanced support for Reference Application, which is designed for workload monitoring on managed clusters, by adding the Enable Reference Application check box to the StackLight tab of the Create new cluster wizard in the Container Cloud web UI.

You can also enable this option after deployment using the Configure cluster menu of the Container Cloud web UI or using CLI by editing the StackLight parameters in the Cluster object.
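When using the CLI, the StackLight parameters reside in the values of the stacklight Helm release within the Cluster object. The snippet below is an illustrative sketch of where these values live; the exact parameter that toggles Reference Application is described in the Operations Guide and is not reproduced here:

  spec:
    providerSpec:
      value:
        helmReleases:
        - name: stacklight
          values:
            # Reference Application parameters are set here,
            # as described in the StackLight configuration documentation
            ...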

The Reference Application enhancement also comprises switching from MariaDB to PostgreSQL to improve the application stability and performance.

Note

Reference Application requires the following resources per cluster on top of the main product requirements:

  • Up to 1 GiB of RAM

  • Up to 3 GiB of storage

General availability of Ceph Shared File System

Completed the development of the Ceph Shared File System (CephFS) feature. CephFS provides the capability to create read/write shared file system Persistent Volumes (PVs).

Caution

For MKE clusters that are part of MOSK infrastructure, the feature is not supported yet.

Support of shared Ceph clusters

TechPreview

Implemented a mechanism connecting a consumer cluster to a producer cluster. The consumer cluster uses the Ceph cluster deployed on the producer cluster to store the necessary data.

Caution

For MKE clusters that are part of MOSK infrastructure, the feature is not supported yet.

Sharing of a Ceph cluster with attached MKE clusters

Implemented the ability to share a Ceph cluster with MKE clusters that were not originally deployed by Container Cloud and are attached to the management cluster. Shared Ceph clusters allow providing the Ceph-based CSI driver to MKE clusters. Both ReadWriteOnce (RWO) and ReadWriteMany (RWX) access modes are supported with shared Ceph clusters.

Caution

For MKE clusters that are part of MOSK infrastructure, the feature is not supported yet.

Two Ceph Managers by default for HA

Increased the default number of Ceph Managers deployed on a Ceph cluster to two, active and stand-by, to improve fault tolerance and HA.

On existing clusters, the second Ceph Manager deploys automatically after a managed cluster update.

Note

Mirantis recommends labeling at least 3 Ceph nodes with the mgr role, which equals the default number of Ceph nodes with the mon role. In such a configuration, one backup Ceph node will be available to redeploy a failed Ceph Manager in case of a server outage.
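The following snippet is a sketch of labeling Ceph nodes with the mgr role in the KaaSCephCluster specification; the machine names are hypothetical and only the nodes section is shown:

  spec:
    cephClusterSpec:
      nodes:
        machine-1:     # hypothetical machine name
          roles:
          - mon
          - mgr
        machine-2:
          roles:
          - mon
          - mgr
        machine-3:
          roles:
          - mon
          - mgr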

Note

For MOSK-based deployments, the feature support is available since MOSK 23.1.

Components versions

The following table lists the components versions of the Cluster release 11.6.0. For major components and versions of the Container Cloud release, see Container Cloud release 2.22.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.5.5 0

Container runtime

Mirantis Container Runtime

20.10.13 1

Distributed storage

Ceph

15.2.17 (Octopus)

Rook

1.0.0-20220809220209

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-352-gf55d6378

lcm-ansible Updated

0.20.1-2-g9148ac3

lcm-agent Updated

0.3.0-352-gf55d6378

StackLight

Alerta Updated

8.5.0-20221122164956

Alertmanager

0.23.0

Alertmanager Webhook ServiceNow Updated

0.1-20221124153923

Blackbox Exporter

0.19.0

cAdvisor New

0.46.0

Cerebro Updated

0.9.4

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd Updated

1.15.3

Grafana Updated

9.1.8

Grafana Image Renderer Updated

3.6.1-20221103105602

kube-state-metrics

2.2.4

IAM Proxy

6.0.1

Metric Collector Updated

0.1-20221115143126

Metricbeat Updated

7.10.2

Node Exporter

1.2.2

OpenSearch Updated

1-20221129201140

OpenSearch Dashboards Updated

1-20221213070555

Prometheus

2.35.0

Prometheus ES Exporter Updated

0.14.0-20221028070923

Prometheus MS Teams

1.4.2

Prometheus NGINX Exporter Removed

n/a

Prometheus Node Exporter Renamed to Node Exporter

n/a

Prometheus Patroni Exporter Updated

0.0.1

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

sf-notifier Updated

0.3-20221103105502

sf-reporter Updated

0.1-20221128192801

Spilo

13-2.1p1-20220921105803

Telegraf

1.9.1-20221107155248 Updated

1.23.4-20220915114529

Telemeter Updated

4.4

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 11.6.0. For artifacts of the Container Cloud release, see Container Cloud release 2.22.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcc-2.22-3.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.17

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20221221183423

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook:v1.0.0-20220809220209


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.20.1-2-g9148ac3/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-352-gf55d6378/lcm-agent

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.35.11.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.35.11.tgz

Docker images

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-352-gf55d6378


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta Updated

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-27.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cadvisor New

https://binary.mirantis.com/stacklight/helm/cadvisor-0.1.0-mcp-2.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-44.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-156.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-191.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.13.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-52.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-40.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-45.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-229.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

prometheus-nginx-exporter Removed

n/a

refapp Updated

https://binary.mirantis.com/scale/helm/refapp-0.2.1-mcp-1.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-4.tgz

sf-reporter Updated

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.10.6.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20221122164956

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20221124153923

alpine-utils New

mirantis.azurecr.io/stacklight/alpine-utils:1-20221213101955

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.19.0

busybox Removed

n/a

cadvisor New

mirantis.azurecr.io/stacklight/cadvisor:v0.46.0

cerebro Updated

mirantis.azurecr.io/stacklight/cerebro:v0.9-20221028114642

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curator Updated

mirantis.azurecr.io/stacklight/curator:5.7.6-20221125180652

curl Removed

n/a

curl-jq Removed

n/a

elasticsearch_exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.15-20221205103417

grafana Updated

mirantis.azurecr.io/stacklight/grafana:9.1.8

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.6.1-20221103105602

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.21.0-20221122115008

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.22.13

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20221115143126

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20221208132713

nginx-prometheus-exporter Removed

n/a

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:1-20221129201140

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20221213070555

origin-telemeter Removed

n/a

pgbouncer Updated

mirantis.azurecr.io/stacklight/pgbouncer:1-20221116202249

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.35.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20221028070923

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:0.0.1-20221118112512

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

refapp

mirantis.azurecr.io/openstack/openstack-refapp:0.0.1.dev29

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20221103105502

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20221128192801

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p1-20220921105803

stacklight-toolkit Updated

mirantis.azurecr.io/stacklight/stacklight-toolkit:20221202065207

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20221107155248 Updated

mirantis.azurecr.io/stacklight/telegraf:1.23.4-20220915114529

telemeter New

mirantis.azurecr.io/stacklight/telemeter:4.4-20221129100512

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq Updated

mirantis.azurecr.io/stacklight/yq:4.30.5


1

Only for bare metal and Equinix Metal with private networking

2

Only for existing bare metal clusters

For the list of known and addressed issues, refer to the Container Cloud release 2.22.0 section.

11.5.0

This section outlines release notes for the Cluster release 11.5.0 that is introduced in the Mirantis Container Cloud release 2.21.0.

This Cluster release supports Mirantis Kubernetes Engine 3.5.5 with Kubernetes 1.21 and Mirantis Container Runtime 20.10.13.

Enhancements

This section outlines new features implemented in the Cluster release 11.5.0 that is introduced in the Container Cloud release 2.21.0.

MKE and MCR patch release update

Updated the Mirantis Kubernetes Engine (MKE) version from 3.5.4 to 3.5.5 and the Mirantis Container Runtime (MCR) version from 20.10.12 to 20.10.13 for the Container Cloud management, regional, and managed clusters on all supported cloud providers, as well as for attachment of existing MKE clusters that were not originally deployed by Container Cloud.

Caution

For MKE clusters that are part of MOSK infrastructure, the feature support will become available in one of the following Container Cloud releases.

MetalLB minor version update

Updated the MetalLB version from 0.12.1 to 0.13.4 for the Container Cloud management, regional, and managed clusters of all cloud providers that use MetalLB: bare metal, Equinix Metal with public and private networking, vSphere.

The MetalLB configuration is now stored in dedicated MetalLB objects instead of the ConfigMap object.
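In MetalLB 0.13.x, address pools and their advertisement are defined through the IPAddressPool and L2Advertisement objects. The snippet below is an upstream-style sketch with an illustrative address range; in Container Cloud, these objects are managed through the cluster configuration rather than created manually:

  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: default
    namespace: metallb-system
  spec:
    addresses:
    - 10.0.0.100-10.0.0.120   # illustrative address range
  ---
  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: default
    namespace: metallb-system
  spec:
    ipAddressPools:
    - default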

Caution

For MKE clusters that are part of MOSK infrastructure, the feature support will become available in one of the following Container Cloud releases.

Enhanced etcd monitoring

Improved etcd monitoring by implementing the Etcd dashboard and etcdDbSizeCritical and etcdDbSizeMajor alerts that inform about the size of the etcd database.

Caution

For MKE clusters that are part of MOSK infrastructure, the feature support will become available in one of the following Container Cloud releases.

Reference Application for workload monitoring

Implemented Reference Application, a small microservice application that enables workload monitoring on non-MOSK managed clusters. It mimics a classical microservice application and provides metrics that describe the likely behavior of user workloads.

Reference Application contains a set of alerts and a separate Grafana dashboard to provide check statuses of Reference Application and statistics such as response time and content length.

The feature is disabled by default and can be enabled using the StackLight configuration manifest.

Ceph secrets specification in the Ceph cluster status

Added the miraCephSecretsInfo specification to KaaSCephCluster.status. This specification contains the current state and details of secrets that are used in the Ceph cluster, such as keyrings, Ceph clients, RADOS Gateway user credentials, and so on.

Using miraCephSecretsInfo, you can create, access, and remove Ceph RADOS Block Device (RBD) or Ceph File System (CephFS) clients and RADOS Gateway (RGW) users.

Caution

For MKE clusters that are part of MOSK infrastructure, the feature is not supported yet.

Amazon S3 bucket policies for Ceph Object Storage users

Implemented the ability to create and configure Amazon S3 bucket policies between Ceph Object Storage users.

Caution

For MKE clusters that are part of MOSK infrastructure, the feature support will become available in one of the following Container Cloud releases.

Components versions

The following table lists the components versions of the Cluster release 11.5.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration Updated

Mirantis Kubernetes Engine

3.5.5 0

Container runtime Updated

Mirantis Container Runtime

20.10.13 1

Distributed storage Updated

Ceph

15.2.17 (Octopus)

Rook

1.0.0-20220809220209

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-327-gbc30b11b

lcm-ansible Updated

0.19.0-12-g6cad672

lcm-agent Updated

0.3.0-327-gbc30b11b

StackLight

Alerta Updated

8.5.0-20220923121625

Alertmanager

0.23.0

Alertmanager Webhook ServiceNow

0.1-20220706035316

Cerebro Updated

0.9-20220923122026

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd Updated

1.14-20220922214003

Grafana

9.0.2

Grafana Image Renderer Updated

3.5.0

IAM Proxy

6.0.1

Metric Collector

0.1-20220711134630

Metricbeat Updated

7.10.2-20220909091002

OpenSearch

1-20220517112057

OpenSearch Dashboards

1-20220517112107

Prometheus

2.35.0

Prometheus Blackbox Exporter

0.19.0

Prometheus ES Exporter

0.14.0-20220517111946

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.2.2

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20220624102731

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

Reference Application New

0.0.1

sf-notifier

0.3-20220706035002

sf-reporter Updated

0.1-20220916113234

Spilo Updated

13-2.1p1-20220921105803

Telegraf

1.9.1-20220714080809

1.23.4-20220915114529 Updated

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 11.5.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-964.tgz

Docker images

ceph Updated

mirantis.azurecr.io/ceph/ceph:v15.2.17

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20221024145202

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook Updated

mirantis.azurecr.io/ceph/rook:v1.0.0-20220809220209


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.19.0-12-g6cad672/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-327-gbc30b11b/lcm-agent

Helm charts Updated

helm-controller

https://binary.mirantis.com/core/helm/helm-controller-1.34.16.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.34.16.tgz

Docker images

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-327-gbc30b11b


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-25.tgz

alertmanager-webhook-servicenow Updated

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch-curator Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-37.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-142.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-173.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.13.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-52.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-40.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-42.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-229.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams Updated

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

refapp New

https://binary.mirantis.com/scale/helm/refapp-0.1.1-mcp-1.tgz

sf-notifier Updated

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-4.tgz

sf-reporter Updated

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-5.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.9.2.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20220923121625

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20220706035316

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.19.0

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro Updated

mirantis.azurecr.io/stacklight/cerebro:v0.9-20220923122026

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch_exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.14-20220922214003

grafana

mirantis.azurecr.io/stacklight/grafana:9.0.2

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.5.0

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.15.9

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22.13

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20220711134630

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20220909091002

nginx-prometheus-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

opensearch

mirantis.azurecr.io/stacklight/opensearch:1-20220517112057

opensearch-dashboards

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20220517112107

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

pgbouncer

mirantis.azurecr.io/stacklight/pgbouncer:1.12.0

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.35.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20220517111946

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20220624102731

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

refapp New

mirantis.azurecr.io/openstack/openstack-refapp:0.0.1.dev29

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20220706035002

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20220916113234

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p1-20220921105803

stacklight-toolkit New

mirantis.azurecr.io/stacklight/stacklight-toolkit:20220729121446

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20220714080809

mirantis.azurecr.io/stacklight/telegraf:1.23.4-20220915114529 Updated

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:4.25.2


1

Only for bare metal and Equinix Metal with private networking

2

Only for existing bare metal clusters

For the list of known and resolved issues, refer to the Container Cloud release 2.21.0 section.

11.4.0

This section outlines release notes for the Cluster release 11.4.0 that is introduced in the Mirantis Container Cloud release 2.20.0.

This Cluster release supports Mirantis Kubernetes Engine 3.5.4 with Kubernetes 1.21 and Mirantis Container Runtime 20.10.12.

Enhancements

This section outlines new features implemented in the Cluster release 11.4.0 that is introduced in the Container Cloud release 2.20.0.

MKE and MCR version update

Updated the Mirantis Kubernetes Engine (MKE) version from 3.5.3 to 3.5.4 and the Mirantis Container Runtime (MCR) version from 20.10.11 to 20.10.12 for the Container Cloud management, regional, and managed clusters on all supported cloud providers, as well as for attachment of existing MKE clusters that were not originally deployed by Container Cloud.

Ceph removal from management and regional clusters

To reduce resource consumption, removed Ceph cluster deployment from management and regional clusters based on bare metal and Equinix Metal with private networking. Ceph is automatically removed during the Cluster release update to 11.4.0. Managed clusters continue using Ceph as a distributed storage system.

Creation of Ceph RADOS Gateway users

Implemented the objectUsers RADOS Gateway parameter in the KaaSCephCluster CR. The new parameter allows for easy creation of custom Ceph RADOS Gateway users with permission rules. The users parameter is now deprecated and, if specified, will be automatically transformed to objectUsers.
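The following snippet is a sketch of the objectUsers parameter, assuming that it resides under the RADOS Gateway section of the KaaSCephCluster specification; the user name, display name, and capabilities are illustrative and the exact field placement may differ, so refer to the Ceph operations documentation for the authoritative schema:

  spec:
    cephClusterSpec:
      objectStorage:
        rgw:
          objectUsers:
          - name: app-user                 # illustrative user name
            displayName: Application user  # illustrative display name
            capabilities:                  # illustrative permission rules
              user: read
              bucket: '*'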

Custom RBD map options

Implemented the rbdDeviceMapOptions field in the Ceph pool parameters of the KaaSCephCluster CR. The new field allows specifying custom RADOS Block Device (RBD) map options to use with the StorageClass of a corresponding Ceph pool.
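The following snippet is a sketch of a Ceph pool with rbdDeviceMapOptions set; the pool definition and the RBD map options are illustrative:

  spec:
    cephClusterSpec:
      pools:
      - name: kubernetes                                    # illustrative pool name
        role: kubernetes
        rbdDeviceMapOptions: lock_on_read,queue_depth=1024  # illustrative RBD map options
        replicated:
          size: 3
        default: true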

Ceph Manager modules configuration

Implemented the mgr.mgrModules parameter that includes the name and enabled keys to provide the capability to disable a particular Ceph Manager module. The mgr.modules parameter is now deprecated and, if specified, will be automatically transformed to mgr.mgrModules.
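The following snippet is a sketch of the mgr.mgrModules parameter with the name and enabled keys; the module names are illustrative:

  spec:
    cephClusterSpec:
      mgr:
        mgrModules:
        - name: balancer       # illustrative module name
          enabled: true
        - name: pg_autoscaler
          enabled: false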

Ceph daemons health check configuration

Implemented the capability to configure health checks and liveness probe settings for Ceph daemons through the KaaSCephCluster CR.

Components versions

The following table lists the components versions of the Cluster release 11.4.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.5.4 0

Container runtime

Mirantis Container Runtime Updated

20.10.12 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook

1.0.0-20220504194120

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-285-g8498abe0

lcm-ansible Updated

0.18.1

lcm-agent Updated

0.3.0-288-g405179c2

metallb-controller Updated

0.12.1

metrics-server

0.5.2

StackLight

Alerta

8.5.0-20211108051042

Alertmanager

0.23.0

Alertmanager Webhook ServiceNow Updated

0.1-20220706035316

Cerebro

0.9.3

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.14-20220111114545

Grafana Updated

9.0.2

Grafana Image Renderer

3.4.2

IAM Proxy

6.0.1

Metric Collector Updated

0.1-20220711134630

Metricbeat

7.10.2-20220309185937

OpenSearch

1-20220517112057

OpenSearch Dashboards

1-20220517112107

Prometheus

2.35.0

Prometheus Blackbox Exporter

0.19.0

Prometheus ES Exporter

0.14.0-20220517111946

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.2.2

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter Updated

0.1-20220624102731

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

sf-notifier Updated

0.3-20220706035002

sf-reporter Updated

0.1-20220622101204

Spilo

13-2.1p1-20220225091552

Telegraf Updated

1.9.1-20220714080809

1.20.2-20220204122426

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 11.4.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-908.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20220819101016

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook:v1.0.0-20220504194120


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.18.1/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-288-g405179c2/lcm-agent

Helm charts Updated

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.33.5.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.33.5.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.33.5.tgz

Docker images

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-285-g8498abe0

metallb-controller Updated

mirantis.azurecr.io/bm/external/metallb/controller:v0.12.1

metallb-speaker Updated

mirantis.azurecr.io/bm/external/metallb/speaker:v0.12.1

metrics-server

mirantis.azurecr.io/core/external/metrics-server:v0.5.2


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-25.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-9.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-37.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-131.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-154.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.13.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-6.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch Updated

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-52.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-40.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-42.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-228.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-8.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-2.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-3.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.8.1.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-6.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-6.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20211108051042

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20220706035316

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.14-20220111114545

grafana Updated

mirantis.azurecr.io/stacklight/grafana:9.0.2

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.4.2

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.15.9

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20220711134630

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20220309185937

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

opensearch

mirantis.azurecr.io/stacklight/opensearch:1-20220517112057

opensearch-dashboards

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20220517112107

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

pgbouncer

mirantis.azurecr.io/stacklight/pgbouncer:1.12.0

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.35.0

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.19.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20220517111946

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20220624102731

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20220706035002

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20220622101204

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p1-20220225091552

telegraf Updated

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20220714080809

mirantis.azurecr.io/stacklight/telegraf:1.20.2-20220204122426

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:4.25.2


For the list of known and resolved issues, refer to the Container Cloud release 2.20.0 section.

11.3.0

This section outlines release notes for the Cluster release 11.3.0 that is introduced in the Mirantis Container Cloud release 2.19.0.

This Cluster release supports Mirantis Kubernetes Engine 3.5.3 with Kubernetes 1.21 and Mirantis Container Runtime 20.10.11.

Enhancements

This section outlines new features implemented in the Cluster release 11.3.0 that is introduced in the Container Cloud release 2.19.0.


Kubernetes Containers Grafana dashboard

Implemented a new Kubernetes Containers Grafana dashboard that provides resources consumption metrics of containers running on Kubernetes nodes.

Improvements to StackLight alerting

Enhanced the documentation by adding troubleshooting guidelines for the Kubernetes system, Metric Collector, Helm Controller, Release Controller, and MKE alerts.

Learn more

Troubleshoot alerts

Elasticsearch switch to OpenSearch

As part of switching from Elasticsearch to OpenSearch, replaced the Elasticsearch parameters with OpenSearch ones in the Container Cloud web UI.

Ceph cluster summary in Container Cloud web UI

Implemented the capability to easily view the summary and health status of all Ceph clusters through the Container Cloud web UI.

Ceph OSD removal or replacement by ID

Implemented the capability to remove or replace Ceph OSDs not only by the device name or path but also by ID, using the by-id parameter in the KaaSCephOperationRequest CR.
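The following snippet is a sketch of an OSD removal request that targets a device by ID in the KaaSCephOperationRequest specification; the node name and the device identifier are hypothetical, and the exact placement of the by-id key may differ, so refer to Automated Ceph LCM for the authoritative schema:

  spec:
    osdRemove:
      nodes:
        worker-node-1:                         # hypothetical node name
          cleanupByDevice:
          - name: sdb                          # removal by device name, existing option
          - by-id: <device-by-id identifier>   # removal by device ID, new option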

Learn more

Automated Ceph LCM

Multiple Ceph data pools per CephFS

TechPreview

Implemented the capability to create multiple Ceph data pools for a single CephFS installation using the dataPools parameter in the CephFS specification. The dataPool parameter is now deprecated.
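The following snippet is a sketch of a CephFS specification with multiple data pools defined through the dataPools parameter; the CephFS name, pool names, and replication settings are illustrative, and the surrounding fields are simplified:

  spec:
    cephClusterSpec:
      sharedFilesystem:
        cephFS:
        - name: cephfs-store        # illustrative CephFS name
          dataPools:
          - name: default-pool      # illustrative data pool
            replicated:
              size: 3
          - name: second-pool
            replicated:
              size: 2
          metadataPool:
            replicated:
              size: 3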

Components versions

The following table lists the components versions of the Cluster release 11.3.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.5.3 0

Container runtime

Mirantis Container Runtime

20.10.11 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook

1.0.0-20220504194120

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-257-ga93244da

lcm-ansible Updated

0.17.1-2-g1e337f8

lcm-agent Updated

0.3.0-257-ga93244da

metallb-controller

0.9.3-1

metrics-server

0.5.2

StackLight

Alerta

8.5.0-20211108051042

Alertmanager

0.23.0

Alertmanager Webhook ServiceNow

0.1-20220420161450

Cerebro

0.9.3

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.14-20220111114545

Grafana

8.5.0

Grafana Image Renderer Updated

3.4.2

IAM Proxy

6.0.1

Metric Collector Updated

0.1-20220614110617

Metricbeat

7.10.2-20220309185937

OpenSearch Updated

1-20220517112057

OpenSearch Dashboards Updated

1-20220517112107

Patroni

13-2.1p1-20220225091552

Prometheus Updated

2.35.0

Prometheus Blackbox Exporter

0.19.0

Prometheus ES Exporter Updated

0.14.0-20220517111946

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.2.2

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

sf-notifier Updated

0.3-20220514051554

sf-reporter

0.1-20220419092138

Telegraf

1.9.1-20210225142050

1.20.2-20220204122426

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 11.3.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-831.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20220715144333

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook:v1.0.0-20220504194120


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.17.1-2-g1e337f8/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-257-ga93244da/lcm-agent

Helm charts Updated

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.32.4.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.32.4.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.32.4.tgz

Docker images

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-257-ga93244da

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/core/external/metrics-server:v0.5.2


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-25.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-9.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-37.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-128.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-150.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.12.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-6.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-50.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-40.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-42.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-228.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-8.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-2.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-3.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.7.2.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-5.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-5.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20211108051042

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20220420161450

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.14-20220111114545

grafana

mirantis.azurecr.io/stacklight/grafana:8.5.0

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.4.2

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.15.9

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20220614110617

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20220309185937

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:1-20220517112057

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20220517112107

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus Updated

mirantis.azurecr.io/stacklight/prometheus:v2.35.0

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.19.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20220517111946

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20220514051554

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20220419092138

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p1-20220225091552

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225142050

mirantis.azurecr.io/stacklight/telegraf:1.20.2-20220204122426

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq Updated

mirantis.azurecr.io/stacklight/yq:4.25.2


For the list of known and resolved issues, refer to the Container Cloud release 2.19.0 section.

11.2.0

This section outlines release notes for the Cluster release 11.2.0 that is introduced in the Mirantis Container Cloud release 2.18.0.

This Cluster release supports Mirantis Kubernetes Engine 3.5.3 with Kubernetes 1.21 and Mirantis Container Runtime 20.10.11.

Enhancements

This section outlines new features implemented in the Cluster release 11.2.0 that is introduced in the Container Cloud release 2.18.0.


MKE and MCR version update

Updated the Mirantis Kubernetes Engine (MKE) version from 3.5.1 to 3.5.3 and the Mirantis Container Runtime (MCR) version from 20.10.8 to 20.10.11 for the Container Cloud management, regional, and managed clusters on all supported cloud providers, as well as for the attachment of MKE clusters that are not based on Container Cloud.

Elasticsearch switch to OpenSearch

As part of the switch from Elasticsearch to OpenSearch, removed the Elasticsearch and Kibana services and introduced a set of new parameters that will replace the current ones in future releases. The old parameters are still supported and take precedence over the new ones. For details, see Deprecation notes and StackLight configuration parameters.

Note

In the Container Cloud web UI, the Elasticsearch and Kibana naming is still present. However, the services behind them have switched to OpenSearch and OpenSearch Dashboards.

Improvements to StackLight alerting

Implemented the following improvements to StackLight alerting:

  • Added the MCCClusterUpdating informational alert that raises when the Mirantis Container Cloud cluster starts updating.

  • Enhanced StackLight alerting by clarifying alert severity levels. Switched all Minor alerts to Warning. Now, only alerts of the following severities exist: informational, warning, major, and critical.

  • Enhanced the documentation by adding troubleshooting guidelines for the Kubernetes applications, resources, and storage alerts.

Prometheus remote write

Implemented the capability to send metrics from Prometheus to a custom monitoring endpoint using the Prometheus remote write feature.
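The following sketch shows how such a target might be declared in the StackLight values of the Cluster object. The remoteWrites key name, the wrapping helmReleases structure, and the endpoint URL are assumptions for illustration only; refer to StackLight configuration parameters for the exact schema.

  # Illustrative sketch only: forward StackLight metrics to an external
  # endpoint through Prometheus remote write. Key names are assumptions.
  spec:
    providerSpec:
      value:
        helmReleases:
          - name: stacklight
            values:
              prometheusServer:
                remoteWrites:                              # assumed key name
                  - url: https://metrics.example.com/api/v1/write
                    basicAuth:                             # assumed auth structure
                      username: stacklight
                      password: <secret>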

StackLight mandatory parameters

Defined the following parameters as mandatory in the StackLight configuration of the Cluster object for all cluster types. This applies only to clusters with StackLight enabled. For existing clusters, the Cluster object is updated automatically. A configuration sketch follows the parameter mapping below.

Important

When creating a new cluster, specify these parameters through the Container Cloud web UI or as described in StackLight configuration parameters. Update all cluster templates created before Container Cloud 2.18.0 that do not have values for these parameters specified. Otherwise, the admission controller will reject cluster creation.

Web UI parameter

API parameter

Enable Logging

logging.enabled

HA Mode

highAvailabilityEnabled

Prometheus Persistent Volume Claim Size

prometheusServer.persistentVolumeClaimSize

Elasticsearch Persistent Volume Claim Size

elasticsearch.persistentVolumeClaimSize
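The following sketch illustrates where these mandatory values live in the StackLight section of the Cluster object. The parameter names come from the mapping above; the wrapping helmReleases structure and the example sizes are assumptions for illustration only.

  # Illustrative sketch of the mandatory StackLight values. The wrapping
  # structure and the example sizes are assumptions.
  spec:
    providerSpec:
      value:
        helmReleases:
          - name: stacklight
            values:
              logging:
                enabled: true                        # Enable Logging
              highAvailabilityEnabled: true          # HA Mode
              prometheusServer:
                persistentVolumeClaimSize: 16Gi      # example size
              elasticsearch:
                persistentVolumeClaimSize: 30Gi      # example size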

Ceph daemons placement

Implemented the capability to configure the placement of the rook-ceph-operator, rook-discover, and csi-rbdplugin Ceph daemons.
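A hypothetical placement sketch follows. Only the daemon names come from this release note; every key name in the sketch is an assumption for illustration, so refer to the Ceph operations documentation for the exact parameters.

  # Hypothetical sketch: pin the rook-ceph-operator daemon to dedicated nodes
  # through the Ceph controller Helm chart values. Every key name here is an
  # assumption; rook-discover and csi-rbdplugin would take analogous blocks.
  values:
    rookOperatorPlacement:                 # assumed key name
      nodeSelector:
        role: ceph-control
      tolerations:
        - key: node-role.kubernetes.io/storage
          operator: Exists
          effect: NoSchedule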

Components versions

The following table lists the components versions of the Cluster release 11.2.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.5.3 0

Container runtime

Mirantis Container Runtime Updated

20.10.11 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook

1.0.0-20220504194120

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-239-gae7218ea

lcm-ansible Updated

0.16.0-13-gcac49ca

lcm-agent Updated

0.3.0-239-gae7218ea

metallb-controller

0.9.3-1

metrics-server

0.5.2

StackLight

Alerta

8.5.0-20211108051042

Alertmanager

0.23.0

Alertmanager Webhook ServiceNow Updated

0.1-20220420161450

Cerebro

0.9.3

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.14-20220111114545

Grafana Updated

8.5.0

Grafana Image Renderer

3.2.1

IAM Proxy

6.0.1

Metric Collector

0.1-20220209123106

Metricbeat

7.10.2-20220309185937

OpenSearch

1-20220316161927

OpenSearch Dashboards

1-20220316161951

Patroni

13-2.1p1-20220225091552

Prometheus

2.31.1

Prometheus Blackbox Exporter

0.19.0

Prometheus ES Exporter

0.14.0-20220111114356

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.2.2

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

sf-notifier

0.3-20210930112115

sf-reporter Updated

0.1-20220419092138

Telegraf

1.9.1-20210225142050

1.20.2-20220204122426

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 11.2.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-792.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20220506180707

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook:v1.0.0-20220504194120


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.16.0-13-gcac49ca/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-239-gae7218ea/lcm-agent

Helm charts Updated

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.31.9.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.31.9.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.31.9.tgz

Docker images

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-239-gae7218ea

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.5.2


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-25.tgz

alertmanager-webhook-servicenow Updated

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch Removed

n/a

elasticsearch-curator Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-9.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-37.tgz

fluentd-elasticsearch Removed

n/a

fluentd-logs New

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-128.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-145.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.12.tgz

kibana Removed

n/a

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-6.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch New

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-50.tgz

opensearch-dashboards New

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-40.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-42.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-225.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams Updated

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-8.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

sf-notifier Updated

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-2.tgz

sf-reporter Updated

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-3.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.6.1.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-5.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-5.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20211108051042

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20220420161450

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.14-20220111114545

grafana Updated

mirantis.azurecr.io/stacklight/grafana:8.5.0

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.2.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.15.9

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20220209123106

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20220309185937

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

opensearch

mirantis.azurecr.io/stacklight/opensearch:1-20220316161927

opensearch-dashboards

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20220316161951

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.31.1

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.19.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20220111114356

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210930112115

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20220419092138

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p1-20220225091552

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

mirantis.azurecr.io/stacklight/telegraf:1.20.2-20220204122426

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


For the list of known and resolved issues, refer to the Container Cloud release 2.18.0 section.

11.1.0

This section outlines release notes for the Cluster release 11.1.0 that is introduced in the Mirantis Container Cloud release 2.17.0.

This Cluster release supports Mirantis Kubernetes Engine 3.5.1 with Kubernetes 1.21 and Mirantis Container Runtime 20.10.8.

Enhancements

This section outlines new features implemented in the Cluster release 11.1.0 that is introduced in the Container Cloud release 2.17.0.


MKE 3.5.1 for management and regional clusters

Expanded support for Mirantis Kubernetes Engine (MKE) 3.5.1, which includes Kubernetes 1.21, to the Container Cloud management and regional clusters. The MKE 3.5.1 support for managed clusters was introduced in Container Cloud 2.16.0.

Elasticsearch retention time per index

Implemented the capability to configure the Elasticsearch retention time per logs, events, and notifications indices when creating a managed cluster through the Container Cloud web UI.

The Retention Time parameter in the Container Cloud web UI is now replaced with the Logstash Retention Time, Events Retention Time, and Notifications Retention Time parameters.

Helm Controller monitoring

Implemented monitoring and added alerts for the Helm Controller service and the HelmBundle custom resources.

Configurable timeouts for Ceph requests

Implemented configurable timeouts for processing Ceph requests. The default is set to 30 minutes. You can configure the timeout using the pgRebalanceTimeoutMin parameter in the Ceph Helm chart.
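A minimal sketch follows, assuming the parameter is set through the Ceph controller Helm chart values in the Cluster object. Only pgRebalanceTimeoutMin and its 30-minute default come from this release note; the release name and wrapping structure are assumptions.

  # Sketch: override the default request-processing timeout.
  # The release name and wrapping structure are assumptions.
  spec:
    providerSpec:
      value:
        helmReleases:
          - name: ceph-controller          # assumed release name
            values:
              pgRebalanceTimeoutMin: 60    # timeout in minutes, example value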

Configurable replicas count for Ceph controllers

Implemented the capability to configure the replicas count for cephController, cephStatus, and cephRequest controllers using the replicas parameter in the Ceph Helm chart. The default is set to 3 replicas.
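A minimal sketch, under the same assumptions about the chart values structure as above; only the replicas parameter, its default of 3, and the controller names come from this release note.

  # Sketch: set the replica count used by the cephController, cephStatus,
  # and cephRequest controllers. The wrapping structure is an assumption.
  values:
    replicas: 3    # default is 3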

Ceph KaaSCephCluster controller

Implemented a separate ceph-kcc-controller that runs on a management cluster and manages the KaaSCephCluster custom resource (CR). Previously, the KaaSCephCluster CR was managed by bm-provider.

Learn more

Ceph overview

Components versions

The following table lists the components versions of the Cluster release 11.1.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.5.1 0

Container runtime

Mirantis Container Runtime

20.10.8 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook Updated

1.0.0-20220504194120

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-229-g4774bbbb

lcm-ansible Updated

0.15.0-24-gf023ea1

lcm-agent Updated

0.3.0-229-g4774bbbb

metallb-controller

0.9.3-1

metrics-server

0.5.2

StackLight

Alerta

8.5.0-20211108051042

Alertmanager

0.23.0

Alertmanager Webhook ServiceNow

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.14-20220111114545

Grafana

8.2.7

Grafana Image Renderer

3.2.1

IAM Proxy

6.0.1

Metric Collector

0.1-20220209123106

Metricbeat Updated

7.10.2-20220309185937

OpenSearch Updated

1-20220316161927

OpenSearch Dashboards Updated

1-20220316161951

Patroni Updated

13-2.1p1-20220225091552

Prometheus

2.31.1

Prometheus Blackbox Exporter

0.19.0

Prometheus ES Exporter

0.14.0-20220111114356

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.2.2

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

sf-notifier

0.3-20210930112115

sf-reporter

0.1-20210607111404

Telegraf

1.9.1-20210225142050

Updated

1.20.2-20220204122426

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 11.1.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-719.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20220421152918

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook Updated

mirantis.azurecr.io/ceph/rook:v1.0.0-20220504194120


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.15.0-24-gf023ea1/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-229-g4774bbbb/lcm-agent

Helm charts Updated

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.30.6.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.30.6.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.30.6.tgz

Docker images

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-229-g4774bbbb

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.5.2


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-25.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-1.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-45.tgz

elasticsearch-curator Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-8.tgz

elasticsearch-exporter Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-36.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-123.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-130.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.12.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-36.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-4.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-42.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-218.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-1.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-1.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.5.3.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-4.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-4.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20211108051042

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.14-20220111114545

grafana

mirantis.azurecr.io/stacklight/grafana:8.2.7

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.2.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20220209123106

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20220309185937

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:1-20220316161927

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20220316161951

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.31.1

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.19.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20220111114356

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210930112115

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20210607111404

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p1-20220225091552

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

Updated

mirantis.azurecr.io/stacklight/telegraf:1.20.2-20220204122426

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


For the list of known and resolved issues, refer to the Container Cloud release 2.17.0 section.

11.0.0

This section outlines release notes for the Cluster release 11.0.0 that is introduced in the Mirantis Container Cloud release 2.16.0 and is designed for managed clusters.

This Cluster release supports Mirantis Kubernetes Engine 3.5.1 with Kubernetes 1.21 and Mirantis Container Runtime 20.10.8.

For the list of known and resolved issues, refer to the Container Cloud release 2.16.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 11.0.0 that is introduced in the Container Cloud release 2.16.0.


MKE 3.5.1

Introduced support for Mirantis Kubernetes Engine (MKE) 3.5.1, which includes Kubernetes 1.21, for deployment on the Container Cloud managed clusters. Also added support for attachment of existing MKE 3.5.1 clusters.

Improvements to StackLight alerting

Added the KubePodsRegularLongTermRestarts alert that raises in case of a long-term periodic restart of containers.

Elasticsearch retention time per index

Implemented the capability to configure the Elasticsearch retention time per index using the elasticsearch.retentionTime parameter in the StackLight Helm chart. Now, you can configure different retention periods for different indices: logs, events, and notifications.

The elasticsearch.logstashRetentionTime parameter is now deprecated.
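A minimal sketch of the new per-index retention configuration follows. The elasticsearch.retentionTime parameter and the deprecated elasticsearch.logstashRetentionTime come from this release note; the nested index key names are assumptions inferred from the logs, events, and notifications indices.

  # Sketch: per-index retention in the StackLight Helm chart values.
  # The nested index keys are assumptions.
  elasticsearch:
    # logstashRetentionTime: 5        # deprecated single value, days
    retentionTime:
      logstash: 5                     # logs index retention, days (assumed key)
      events: 10                      # events index retention, days (assumed key)
      notifications: 30               # notifications index retention, days (assumed key)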

Prometheus Blackbox Exporter configuration

Implemented the capability to configure Prometheus Blackbox Exporter, including customModules and timeoutOffset, through the StackLight Helm chart.
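A minimal sketch follows, assuming the settings live under a Blackbox Exporter section of the StackLight Helm chart values. The customModules and timeoutOffset names come from this release note; the wrapping key name and the module body, which follows the upstream Blackbox Exporter module syntax, are assumptions.

  # Sketch: Blackbox Exporter customization in the StackLight values.
  # The wrapping key name is an assumption.
  prometheusBlackboxExporter:          # assumed key name
    timeoutOffset: 0.5
    customModules:
      http_2xx_insecure:
        prober: http
        http:
          tls_config:
            insecure_skip_verify: true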

Custom Prometheus scrape configurations

Implemented the capability to define custom Prometheus scrape configurations.
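A minimal sketch follows; the customScrapeConfigs key name is an assumption, and the job body uses the standard Prometheus scrape_config syntax.

  # Sketch: add a custom scrape job through the StackLight values.
  # The customScrapeConfigs key name is an assumption.
  prometheusServer:
    customScrapeConfigs:               # assumed key name
      - job_name: my-application
        static_configs:
          - targets:
              - my-app.example.svc:8080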

Elasticsearch switch to OpenSearch

Due to licensing changes for Elasticsearch, Mirantis Container Cloud has switched from using Elasticsearch to OpenSearch and Kibana has switched to OpenSearch Dashboards. OpenSearch is a fork of Elasticsearch under the open-source Apache License with development led by Amazon Web Services.

For new deployments with the logging stack enabled, OpenSearch is now deployed by default. For existing deployments, migration to OpenSearch is performed automatically during the cluster update. However, the entire Elasticsearch cluster may go down for up to 15 minutes.

Components versions

The following table lists the components versions of the Cluster release 11.0.0.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.5.1 0

Container runtime

Mirantis Container Runtime

20.10.8 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook

1.7.6

LCM

Helm

2.16.11-40

helm-controller

0.3.0-187-gba894556

lcm-ansible

0.14.0-14-geb6a51f

lcm-agent

0.3.0-187-gba894556

metallb-controller

0.9.3-1

metrics-server

0.5.2

StackLight

Alerta

8.5.0-20211108051042

Alertmanager

0.23.0

Alertmanager Webhook ServiceNow

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.14-20220111114545

Grafana

8.2.7

Grafana Image Renderer

3.2.1

IAM Proxy

6.0.1

Metric Collector

0.1-20220209123106

Metricbeat

7.10.2-20220111114624

OpenSearch

1.2-20220114131142

OpenSearch Dashboards

1.2-20220114131222

Patroni

13-2.1p1-20220131130853

Prometheus

2.31.1

Prometheus Blackbox Exporter

0.19.0

Prometheus ES Exporter

0.14.0-20220111114356

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.2.2

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

sf-notifier

0.3-20210930112115

sf-reporter

0.1-20210607111404

Telegraf

1.9.1-20210225142050

1.20.0-20210927090119

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 11.0.0.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-661.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20220203124822

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.7.6


LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.14.0-14-geb6a51f/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-187-gba894556/lcm-agent

Helm charts

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.29.6.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.29.6.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.29.6.tgz

Docker images

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-187-gba894556

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.5.2


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-25.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-1.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-44.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-6.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-36.tgz

fluentd-elasticsearch

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-120.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-125.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.12.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-36.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-4.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-38.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-218.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-1.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-1.tgz

stacklight

https://binary.mirantis.com/stacklight/helm/stacklight-0.4.3.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-4.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-4.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20211108051042

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.14-20220111114545

grafana

mirantis.azurecr.io/stacklight/grafana:8.2.7

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.2.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20220209123106

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20220111114624

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

opensearch

mirantis.azurecr.io/stacklight/opensearch:1.2-20220114131142

opensearch-dashboards

mirantis.azurecr.io/stacklight/opensearch-dashboards:1.2-20220114131222

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.31.1

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.19.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20220111114356

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210930112115

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20210607111404

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p1-20220131130853

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

mirantis.azurecr.io/stacklight/telegraf:1.20.0-20210927090119

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


8.x series

This section outlines release notes for the unsupported Cluster releases of the 8.x series.

8.10.0

The Cluster release 8.10.0 is introduced in the Mirantis Container Cloud release 2.20.1. This Cluster release is based on the Cluster release 7.10.0.

The Cluster release 8.10.0 supports Mirantis Kubernetes Engine 3.4.10 and Mirantis Container Runtime 20.10.12.

For the list of addressed and known issues, refer to the Container Cloud release 2.20.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 8.10.0 that is introduced in the Container Cloud release 2.20.1.

MKE and MCR version update

Updated the Mirantis Kubernetes Engine (MKE) version from 3.4.8 to 3.4.10 and the Mirantis Container Runtime (MCR) version from 20.10.11 to 20.10.12.

Creation of Ceph RADOS Gateway users

Implemented the objectUsers RADOS Gateway parameter in the KaaSCephCluster CR. The new parameter allows for easy creation of custom Ceph RADOS Gateway users with permission rules. The users parameter is now deprecated and, if specified, will be automatically transformed to objectUsers.
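A minimal sketch follows. Only the objectUsers parameter name and its location in the KaaSCephCluster CR come from this release note; the surrounding path and the nested user fields are assumptions modeled on the Rook object store user format.

  # Sketch: define a RADOS Gateway user with permission rules.
  # The path under spec and the nested fields are assumptions.
  spec:
    cephClusterSpec:
      objectStorage:
        rgw:
          objectUsers:
            - name: backup-user                  # assumed field names
              displayName: Backup user
              capabilities:
                bucket: "*"
                user: read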

Ceph cluster summary in Container Cloud web UI

Implemented the capability to easily view the summary and health status of all Ceph clusters through the Container Cloud web UI.

Ceph OSD removal or replacement by ID

Implemented the capability to remove or replace Ceph OSDs not only by the device name or path but also by ID, using the by-id parameter in the KaaSCephOperationRequest CR.
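A minimal sketch follows. Only the KaaSCephOperationRequest CR name and the by-id selection capability come from this release note; the apiVersion and the field names in the request body are assumptions.

  # Sketch: request removal of a Ceph OSD by its ID.
  # apiVersion and field names are assumptions.
  apiVersion: kaas.mirantis.com/v1alpha1   # assumed
  kind: KaaSCephOperationRequest
  metadata:
    name: remove-osd-by-id
  spec:
    osdRemove:                             # assumed field names
      nodes:
        worker-node-1:
          cleanupByOsdId:
            - 5                            # OSD ID to remove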

Learn more

Automated Ceph LCM

Kubernetes Containers Grafana dashboard

Implemented a new Kubernetes Containers Grafana dashboard that provides resources consumption metrics of containers running on Kubernetes nodes.

Components versions

The following table lists the components versions of the Cluster release 8.10.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.4.10 0

Container runtime

Mirantis Container Runtime Updated

20.10.12 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook

1.0.0-20220504194120

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-285-g8498abe0

lcm-ansible Updated

0.18.1

lcm-agent Updated

0.3.0-288-g405179c2

metallb-controller Updated

0.12.1

metrics-server

0.5.2

StackLight

Alerta

8.5.0-20211108051042

Alertmanager

0.23.0

Alertmanager Webhook ServiceNow Updated

0.1-20220706035316

Cerebro

0.9.3

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.14-20220111114545

Grafana Updated

9.0.2

Grafana Image Renderer Updated

3.4.2

IAM Proxy

6.0.1

Metric Collector Updated

0.1-20220711134630

Metricbeat

7.10.2-20220309185937

OpenSearch Updated

1-20220517112057

OpenSearch Dashboards Updated

1-20220517112107

Prometheus Updated

2.35.0

Prometheus Blackbox Exporter

0.19.0

Prometheus ES Exporter Updated

0.14.0-20220517111946

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.2.2

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter Updated

0.1-20220624102731

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

sf-notifier Updated

0.3-20220706035002

sf-reporter Updated

0.1-20220622101204

Spilo

13-2.1p1-20220225091552

Telegraf Updated

1.9.1-20220714080809

1.20.2-20220204122426

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 8.10.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-908.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20220819101016

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook:v1.0.0-20220504194120


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.18.1/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-288-g405179c2/lcm-agent

Helm charts Updated

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.33.5.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.33.5.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.33.5.tgz

Docker images

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-285-g8498abe0

metallb-controller Updated

mirantis.azurecr.io/bm/external/metallb/controller:v0.12.1

metallb-speaker Updated

mirantis.azurecr.io/bm/external/metallb/speaker:v0.12.1

metrics-server

mirantis.azurecr.io/core/external/metrics-server:v0.5.2


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-25.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-9.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-37.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-131.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-154.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.13.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-6.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch Updated

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-52.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-40.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-42.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-228.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-8.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-2.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-3.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.8.1.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-6.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-6.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20211108051042

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20220706035316

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.14-20220111114545

grafana Updated

mirantis.azurecr.io/stacklight/grafana:9.0.2

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.4.2

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.15.9

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20220711134630

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20220309185937

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:1-20220517112057

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20220517112107

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

pgbouncer

mirantis.azurecr.io/stacklight/pgbouncer:1.12.0

prometheus Updated

mirantis.azurecr.io/stacklight/prometheus:v2.35.0

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.19.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20220517111946

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20220624102731

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20220706035002

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20220622101204

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p1-20220225091552

telegraf Updated

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20220714080809

mirantis.azurecr.io/stacklight/telegraf:1.20.2-20220204122426

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq Updated

mirantis.azurecr.io/stacklight/yq:4.25.2


8.8.0

The Cluster release 8.8.0 is introduced in the Mirantis Container Cloud release 2.18.1. This Cluster release is based on the Cluster release 7.8.0.

The Cluster release 8.8.0 supports Mirantis Kubernetes Engine 3.4.8 and Mirantis Container Runtime 20.10.11.

For the list of addressed and known issues, refer to the Container Cloud release 2.18.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 8.8.0 that is introduced in the Container Cloud release 2.18.1.


MKE and MCR version update

Updated the Mirantis Kubernetes Engine (MKE) version from 3.4.7 to 3.4.8 and the Mirantis Container Runtime (MCR) version from 20.10.8 to 20.10.11.

Elasticsearch switch to OpenSearch

As part of the switch from Elasticsearch to OpenSearch, removed the Elasticsearch and Kibana services and introduced a set of new parameters that will replace the current ones in future releases. The old parameters are still supported and take precedence over the new ones. For details, see Deprecation notes and StackLight configuration parameters.

Note

In the Container Cloud web UI, the Elasticsearch and Kibana naming is still present. However, the services behind them have switched to OpenSearch and OpenSearch Dashboards.

Improvements to StackLight alerting

Implemented the following improvements to StackLight alerting:

  • Added the MCCClusterUpdating informational alert that raises when the Mirantis Container Cloud cluster starts updating.

  • Enhanced StackLight alerting by clarifying alert severity levels. Switched all Minor alerts to Warning. Now, only alerts of the following severities exist: informational, warning, major, and critical.

  • Enhanced the documentation by adding troubleshooting guidelines for the Kubernetes applications, resources, and storage alerts.

Prometheus remote write

Implemented the capability to send metrics from Prometheus to a custom monitoring endpoint using the Prometheus remote write feature.

StackLight mandatory parameters

Defined the following parameters as mandatory in the StackLight configuration of the Cluster object for all cluster types. This applies only to clusters with StackLight enabled. For existing clusters, the Cluster object is updated automatically.

Important

When creating a new cluster, specify these parameters through the Container Cloud web UI or as described in StackLight configuration parameters. Update all cluster templates created before Container Cloud 2.18.0 that do not have values for these parameters specified. Otherwise, the Admission Controller will reject cluster creation.

Web UI parameter

API parameter

Enable Logging

logging.enabled

HA Mode

highAvailabilityEnabled

Prometheus Persistent Volume Claim Size

prometheusServer.persistentVolumeClaimSize

Elasticsearch Persistent Volume Claim Size

elasticsearch.persistentVolumeClaimSize

Elasticsearch retention time per index

Implemented the capability to configure the Elasticsearch retention time per logs, events, and notifications indices when creating a managed cluster through the Container Cloud web UI.

The Retention Time parameter in the Container Cloud web UI is now replaced with the Logstash Retention Time, Events Retention Time, and Notifications Retention Time parameters.

Helm Controller monitoring

Implemented monitoring and added alerts for the Helm Controller service and the HelmBundle custom resources.

Ceph daemons placement

Implemented the capability to configure the placement of the rook-ceph-operator, rook-discover, and csi-rbdplugin Ceph daemons.

Configurable timeouts for Ceph requests

Implemented configurable timeouts for processing Ceph requests. The default is set to 30 minutes. You can configure the timeout using the pgRebalanceTimeoutMin parameter in the Ceph Helm chart.

Configurable replicas count for Ceph controllers

Implemented the capability to configure the replicas count for cephController, cephStatus, and cephRequest controllers using the replicas parameter in the Ceph Helm chart. The default is set to 3 replicas.

Ceph KaaSCephCluster Controller

Implemented a separate ceph-kcc-controller that runs on a management cluster and manages the KaaSCephCluster custom resource (CR). Previously, the KaaSCephCluster CR was managed by bm-provider.

Learn more

Ceph overview

Components versions

The following table lists the components versions of the Cluster release 8.8.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.4.8 0

Container runtime

Mirantis Container Runtime Updated

20.10.11 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook Updated

1.0.0-20220504194120

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-239-gae7218ea

lcm-ansible Updated

0.16.0-13-gcac49ca

lcm-agent Updated

0.3.0-239-gae7218ea

metallb-controller

0.9.3-1

metrics-server Updated

0.5.2

StackLight

Alerta

8.5.0-20211108051042

Alertmanager

0.23.0

Alertmanager Webhook ServiceNow Updated

0.1-20220420161450

Cerebro

0.9.3

Elasticsearch curator

5.7.6

Elasticsearch exporter

1.0.2

Fluentd

1.14-20220111114545

Grafana Updated

8.5.0

Grafana Image Renderer

3.2.1

IAM Proxy

6.0.1

Metric Collector

0.1-20220209123106

Metricbeat Updated

7.10.2-20220309185937

OpenSearch Updated

1-20220316161927

OpenSearch Dashboards Updated

1-20220316161951

Patroni Updated

13-2.1p1-20220225091552

Prometheus

2.31.1

Prometheus Blackbox Exporter

0.19.0

Prometheus ES Exporter

0.14.0-20220111114356

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.2.2

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

sf-notifier

0.3-20210930112115

sf-reporter Updated

0.1-20220419092138

Telegraf

1.9.1-20210225142050

1.20.2-20220204122426 Updated

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 8.8.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-792.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20220506180707

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook Updated

mirantis.azurecr.io/ceph/rook:v1.0.0-20220504194120


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.16.0-13-gcac49ca/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-239-gae7218ea/lcm-agent

Helm charts Updated

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.31.9.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.31.9.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.31.9.tgz

Docker images

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-239-gae7218ea

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.5.2


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-25.tgz

alertmanager-webhook-servicenow Updated

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch Removed

n/a

elasticsearch-curator Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-9.tgz

elasticsearch-exporter Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-37.tgz

fluentd-elasticsearch Removed

n/a

fluentd-logs New

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-128.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-145.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.12.tgz

kibana Removed

n/a

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-6.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch New

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-50.tgz

opensearch-dashboards New

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-40.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-42.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-225.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams Updated

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-8.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

sf-notifier Updated

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-2.tgz

sf-reporter Updated

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-3.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.6.1.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-5.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-5.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20211108051042

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20220420161450

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.14-20220111114545

grafana Updated

mirantis.azurecr.io/stacklight/grafana:8.5.0

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.2.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.15.9

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20220209123106

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20220309185937

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:1-20220316161927

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20220316161951

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

pgbouncer

mirantis.azurecr.io/stacklight/pgbouncer:1.12.0

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.31.1

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.19.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20220111114356

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210930112115

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20220419092138

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p1-20220225091552

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

Updated

mirantis.azurecr.io/stacklight/telegraf:1.20.2-20220204122426

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


8.6.0

The Cluster release 8.6.0 is introduced in the Mirantis Container Cloud release 2.16.1. This Cluster release is based on the Cluster release 7.6.0.

The Cluster release 8.6.0 supports:

For the list of addressed and known issues, refer to the Container Cloud release 2.16.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 8.6.0 that is introduced in the Container Cloud release 2.16.1.


MKE version update from 3.4.6 to 3.4.7

Updated the Mirantis Kubernetes Engine (MKE) version from 3.4.6 to 3.4.7 for the Container Cloud management, regional, and managed clusters. Also, added support for attachment of existing MKE 3.4.7 clusters.

Improvements to StackLight alerting

Added the KubePodsRegularLongTermRestarts alert that is raised in case of long-term periodic container restarts.

Elasticsearch retention time per index

Implemented the capability to configure the Elasticsearch retention time per index using the elasticsearch.retentionTime parameter in the StackLight Helm chart. Now, you can configure different retention periods for different indices: logs, events, and notifications.

The elasticsearch.logstashRetentionTime parameter is now deprecated.
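
A minimal sketch of the per-index retention configuration in the StackLight Helm chart values. The elasticsearch.retentionTime parameter name and the logs, events, and notifications index types come from this release; the value format (retention in days) is an assumption:

    elasticsearch:
      retentionTime:
        logs: 7              # illustrative retention periods
        events: 3
        notifications: 3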

Prometheus Blackbox Exporter configuration

Implemented the capability to configure Prometheus Blackbox Exporter, including customModules and timeoutOffset, through the StackLight Helm chart.
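
For illustration, a sketch of such a configuration in the StackLight Helm chart values. Only the customModules and timeoutOffset parameter names come from this release; the nesting is an assumption, and the module definition follows the upstream Blackbox Exporter configuration format:

    blackboxExporter:                  # assumed nesting, illustration only
      timeoutOffset: 0.5
      customModules:
        http_2xx_insecure:
          prober: http
          http:
            tls_config:
              insecure_skip_verify: true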

Custom Prometheus scrape configurations

Implemented the capability to define custom Prometheus scrape configurations.
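
A hedged sketch of what such a definition may look like in the StackLight Helm chart values. The customScrapeConfigs key below is hypothetical (named by analogy with other StackLight parameters); the job itself follows the standard Prometheus scrape configuration format:

    prometheusServer:
      customScrapeConfigs:             # hypothetical key, illustration only
        - job_name: my-app
          static_configs:
            - targets:
                - my-app.default.svc:8080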

Elasticsearch switch to OpenSearch

Due to licensing changes for Elasticsearch, Mirantis Container Cloud has switched from using Elasticsearch to OpenSearch and Kibana has switched to OpenSearch Dashboards. OpenSearch is a fork of Elasticsearch under the open-source Apache License with development led by Amazon Web Services.

For new deployments with the logging stack enabled, OpenSearch is now deployed by default. For existing deployments, migration to OpenSearch is performed automatically during the cluster update. However, the entire Elasticsearch cluster may go down for up to 15 minutes.

Components versions

The following table lists the components versions of the Cluster release 8.6.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.4.7 0

Container runtime

Mirantis Container Runtime

20.10.8 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook

1.7.6

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-187-gba894556

lcm-ansible Updated

0.14.0-14-geb6a51f

lcm-agent Updated

0.3.0-187-gba894556

metallb-controller

0.9.3-1

metrics-server Updated

0.5.2

StackLight

Alerta

8.5.0-20211108051042

Alertmanager

0.23.0

Alertmanager Webhook ServiceNow

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch Removed

n/a

Elasticsearch curator

5.7.6

Elasticsearch exporter

1.0.2

Fluentd Updated

1.14-20220111114545

Grafana

8.2.7

Grafana Image Renderer

3.2.1

IAM Proxy

6.0.1

Kibana Removed

n/a

Metric Collector Updated

0.1-20220209123106

Metricbeat Updated

7.10.2-20220111114624

OpenSearch New

1.2-20220114131142

OpenSearch Dashboards New

1.2-20220114131222

Patroni Updated

13-2.1p1-20220131130853

Prometheus

2.31.1

Prometheus Blackbox Exporter Updated

0.19.0

Prometheus ES Exporter Updated

0.14.0-20220111114356

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.2.2

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

sf-notifier

0.3-20210930112115

sf-reporter

0.1-20210607111404

Telegraf

1.9.1-20210225142050

1.20.0-20210927090119

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 8.6.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-661.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20220303130346

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.7.6


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.14.0-14-geb6a51f/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-187-gba894556/lcm-agent

Helm charts Updated

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.29.6.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.29.6.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.29.6.tgz

Docker images

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-187-gba894556

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server Updated

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.5.2


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-25.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-1.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-44.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-6.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-36.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-120.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-125.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.12.tgz

kibana Updated

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-36.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-4.tgz

metricbeat Updated

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-38.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-218.tgz

prometheus-blackbox-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-1.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-1.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.4.3.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-4.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-4.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20211108051042

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch Removed

n/a

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.14-20220111114545

grafana

mirantis.azurecr.io/stacklight/grafana:8.2.7

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.2.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana Removed

n/a

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20220209123106

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20220111114624

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

opensearch New

mirantis.azurecr.io/stacklight/opensearch:1.2-20220114131142

opensearch-dashboards New

mirantis.azurecr.io/stacklight/opensearch-dashboards:1.2-20220114131222

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.31.1

prometheus-blackbox-exporter Updated

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.19.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20220111114356

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210930112115

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20210607111404

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p1-20220131130853

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

mirantis.azurecr.io/stacklight/telegraf:1.20.0-20210927090119

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


8.5.0

The Cluster release 8.5.0 is introduced in the Mirantis Container Cloud release 2.15.1. This Cluster release is based on the Cluster release 7.5.0.

The Cluster release 8.5.0 supports:

For the list of addressed and known issues, refer to the Container Cloud release 2.15.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 8.5.0 that is introduced in the Container Cloud release 2.15.1.


MOSK on local RAID devices

Available since 2.16.0 Technology Preview

Implemented the initial Technology Preview support for Mirantis OpenStack for Kubernetes (MOSK) deployment on local software-based Redundant Array of Independent Disks (RAID) devices to withstand failure of one device at a time. The feature becomes available once your Container Cloud cluster is automatically upgraded to 2.16.0.

Using a custom bare metal host profile, you can configure and create an mdadm-based software RAID device of the raid10 type if you have an even number of devices available on your servers. At least four storage devices are required for such a RAID device.
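
A sketch of what such a definition might look like in a custom bare metal host profile. The softRaidDevices field name and its layout are assumptions used for illustration only; the raid10 level and the minimum of four devices follow the description above:

    softRaidDevices:                   # assumed field name, illustration only
      - name: /dev/md0
        level: raid10                  # requires an even number of devices, minimum four
        devices:
          - /dev/sdb
          - /dev/sdc
          - /dev/sdd
          - /dev/sde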

MKE and Kubernetes major versions update

Introduced support for the Mirantis Kubernetes Engine version 3.4.6 with Kubernetes 1.20 for the Container Cloud management, regional, and managed clusters. Also, added support for attachment of existing MKE 3.4.6 clusters.

MCR version update

Updated the Mirantis Container Runtime (MCR) version from 20.10.6 to 20.10.8 for the Container Cloud management, regional, and managed clusters on all supported cloud providers.

Network interfaces monitoring

Limited the number of monitored network interfaces to prevent extended Prometheus RAM consumption in big clusters. By default, Prometheus Node Exporter now collects information only about a basic set of interfaces, both host and container. If required, you can edit the list of excluded devices.

Custom Prometheus recording rules

Implemented the capability to define custom Prometheus recording rules through the prometheusServer.customRecordingRules parameter in the StackLight Helm chart. Overriding of existing recording rules is not supported.
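
A minimal sketch, assuming the parameter accepts standard Prometheus recording rule groups; the rule below is illustrative:

    prometheusServer:
      customRecordingRules:
        - name: custom.rules                     # illustrative rule group
          rules:
            - record: instance:node_cpu_utilisation:rate5m
              expr: 1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))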

Syslog packet size configuration

Implemented the capability to configure packet size for the syslog logging output. If remote logging to syslog is enabled in StackLight, use the logging.syslog.packetSize parameter in the StackLight Helm chart to configure the packet size.
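
A minimal sketch of the corresponding StackLight Helm chart values. Only the logging.syslog.packetSize parameter name comes from this release; the enabled key and the value (assumed to be in bytes) are illustrative:

    logging:
      syslog:
        enabled: true                  # remote logging to syslog must be enabled
        packetSize: 2048               # illustrative value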

Prometheus Relay configuration

Implemented the capability to configure the Prometheus Relay client timeout and response size limit through the prometheusRelay.clientTimeout and prometheusRelay.responseLimitBytes parameters in the StackLight Helm chart.
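
A minimal sketch of these parameters in the StackLight Helm chart values; the values and their units (seconds and bytes are assumed) are illustrative:

    prometheusRelay:
      clientTimeout: 30                # assumed to be seconds
      responseLimitBytes: 1048576      # illustrative 1 MiB limit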

Mirantis Container Cloud alerts

Implemented the MCCLicenseExpirationCritical and MCCLicenseExpirationMajor alerts that notify about Mirantis Container Cloud license expiration in less than 10 and 30 days.

Improvements to StackLight alerting

Implemented the following improvements to StackLight alerting:

  • Enhanced Kubernetes applications alerting:

    • Reworked the Kubernetes applications alerts to minimize flapping, avoid firing during pod rescheduling, and detect crash looping for pods that restart less frequently.

    • Added the KubeDeploymentOutage, KubeStatefulSetOutage, and KubeDaemonSetOutage alerts.

    • Removed the redundant KubeJobCompletion alert.

    • Enhanced the alert inhibition rules to reduce alert flooding.

    • Improved alert descriptions.

  • Split TelemeterClientFederationFailed into TelemeterClientFailed and TelemeterClientHAFailed to separate the alerts depending on whether the HA mode is disabled or enabled.

  • Updated the description for DockerSwarmNodeFlapping.

Node Exporter collectors

Disabled unused Node Exporter collectors and implemented the capability to manually enable the required collectors using the nodeExporter.extraCollectorsEnabled parameter, as shown in the sketch after the following list. Only the following collectors are now enabled by default in StackLight:

  • arp

  • conntrack

  • cpu

  • diskstats

  • entropy

  • filefd

  • filesystem

  • hwmon

  • loadavg

  • meminfo

  • netdev

  • netstat

  • nfs

  • stat

  • sockstat

  • textfile

  • time

  • timex

  • uname

  • vmstat
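
A minimal sketch of re-enabling extra collectors through the StackLight Helm chart values. The list format is an assumption; systemd and processes are standard Node Exporter collectors that are disabled by default:

    nodeExporter:
      extraCollectorsEnabled:
        - systemd                      # illustrative collectors to enable
        - processes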

Enhanced Ceph architecture

To improve debugging and log reading, separated Ceph Controller, Ceph Status Controller, and Ceph Request Controller, which used to run in one pod, into three different deployments.

Ceph networks validation

Implemented additional validation of networks specified in spec.cephClusterSpec.network.publicNet and spec.cephClusterSpec.network.clusterNet and prohibited the use of the 0.0.0.0/0 CIDR. Now, the bare metal provider automatically translates the 0.0.0.0/0 network range to the default LCM IPAM subnet if it exists.

You can now also add corresponding labels for the bare metal IPAM subnets when configuring the Ceph cluster during the management cluster deployment.
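
For example, a KaaSCephCluster fragment with explicit, non-default networks; the CIDRs below are illustrative:

    spec:
      cephClusterSpec:
        network:
          publicNet: 10.0.10.0/24      # illustrative CIDR, 0.0.0.0/0 is prohibited
          clusterNet: 10.0.11.0/24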

Automated Ceph LCM

Implemented full support for automated Ceph LCM operations using the KaaSCephOperationRequest CR, such as addition or removal of Ceph OSDs and nodes, as well as replacement of failed Ceph OSDs or nodes.

Learn more

Automated Ceph LCM

Ceph CSI provisioner tolerations and node affinity

Implemented the capability to specify Container Storage Interface (CSI) provisioner tolerations and node affinity for different Rook resources. Added support for the all and mds keys in toleration rules.

Ceph KaaSCephCluster.status enhancement

Extended the fullClusterInfo section of the KaaSCephCluster.status resource with the following fields:

  • cephDetails - contains verbose details of a Ceph cluster state

  • cephCSIPluginDaemonsStatus - contains details on all Ceph CSIs

Ceph Shared File System (CephFS)

TechPreview

Implemented the capability to enable the Ceph Shared File System, or CephFS, to create read/write shared file system Persistent Volumes (PVs).
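
As an illustration of how such a shared file system is typically consumed, a standard Kubernetes PersistentVolumeClaim with the ReadWriteMany access mode; the StorageClass name below is hypothetical:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-data
    spec:
      accessModes:
        - ReadWriteMany                # read/write shared access backed by CephFS
      resources:
        requests:
          storage: 10Gi
      storageClassName: cephfs         # hypothetical StorageClass name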

Components versions

The following table lists the components versions of the Cluster release 8.5.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.4.6 0

Container runtime

Mirantis Container Runtime Updated

20.10.8 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook Updated

1.7.6

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-132-g83a348fa

lcm-ansible Updated

0.13.0-27-gcb6022b

lcm-agent Updated

0.3.0-132-g83a348fa

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta Updated

8.5.0-20211108051042

Alertmanager Updated

0.23.0

Alertmanager Webhook ServiceNow

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch Updated

7.10.2-20211102101126

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20210915110132

Grafana Updated

8.2.7

Grafana Image Renderer Updated

3.2.1

IAM Proxy

6.0.1

Kibana Updated

7.10.2-20211101074638

Metric Collector Updated

0.1-20211109121134

Metricbeat Updated

7.10.2-20211103140113

Patroni

13-2.0p6-20210525081943

Prometheus Updated

2.31.1

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.14.0-20210812120726

Prometheus MS Teams

1.4.2

Prometheus Node Exporter Updated

1.2.2

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

sf-notifier

0.3-20210930112115

sf-reporter

0.1-20210607111404

Telegraf

1.9.1-20210225142050

1.20.0-20210927090119

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 8.5.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-606.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20220204145523

cephcsi Updated

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook Updated

mirantis.azurecr.io/ceph/rook/ceph:v1.7.6


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.13.0-27-gcb6022b/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-132-g83a348fa/lcm-agent

Helm charts Updated

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.28.7.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.28.7.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.28.7.tgz

Docker images

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-132-g83a348fa

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta Updated

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-25.tgz

alertmanager-webhook-servicenow Updated

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-1.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-37.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-6.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-32.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-115.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-121.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.10.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-30.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-3.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-12.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-36.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-214.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-7.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

sf-notifier Updated

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-1.tgz

sf-reporter Updated

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-1.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.3.1.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-1.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-1.tgz

Docker images

alerta Updated

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20211108051042

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch Updated

mirantis.azurecr.io/stacklight/elasticsearch:7.10.2-20211102101126

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210915110132

grafana Updated

mirantis.azurecr.io/stacklight/grafana:8.2.7

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.2.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana Updated

mirantis.azurecr.io/stacklight/kibana:7.10.2-20211101074638

kube-state-metrics Updated

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20211109121134

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20211103140113

node-exporter Updated

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus Updated

mirantis.azurecr.io/stacklight/prometheus:v2.31.1

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20210812120726

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210930112115

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20210607111404

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.0p6-20210525081943

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

mirantis.azurecr.io/stacklight/telegraf:1.20.0-20210927090119

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


7.x series

This section outlines release notes for the unsupported Cluster releases of the 7.x series.

7.11.0

This section outlines release notes for the Cluster release 7.11.0 that is introduced in the Mirantis Container Cloud release 2.21.0 and is the last release in the 7.x series.

This Cluster release supports Mirantis Kubernetes Engine 3.4.11 with Kubernetes 1.20 and Mirantis Container Runtime 20.10.13.

For the list of known and resolved issues, refer to the Container Cloud release 2.21.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 7.11.0 that is introduced in the Container Cloud release 2.21.0.

MKE and MCR patch release update

Updated the Mirantis Kubernetes Engine (MKE) version from 3.4.10 to 3.4.11 and the Mirantis Container Runtime (MCR) version from 20.10.12 to 20.10.13 for the Container Cloud management, regional, and managed clusters on all supported cloud providers, as well as for non Container Cloud based MKE cluster attachment.

MetalLB minor version update

Updated the MetalLB version from 0.12.1 to 0.13.4 for the Container Cloud management, regional, and managed clusters of all cloud providers that use MetalLB: bare metal, Equinix Metal with public and private networking, vSphere.

The MetalLB configuration is now stored in dedicated MetalLB objects instead of the ConfigMap object.
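
For reference, the upstream MetalLB 0.13 objects that replace the address-pool ConfigMap; the pool name, namespace, and address range below are illustrative:

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: default
      namespace: metallb-system
    spec:
      addresses:
        - 192.168.100.10-192.168.100.50
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: default
      namespace: metallb-system
    spec:
      ipAddressPools:
        - default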

Enhanced etcd monitoring

Improved etcd monitoring by implementing the Etcd dashboard and etcdDbSizeCritical and etcdDbSizeMajor alerts that inform about the size of the etcd database.

Reference Application for workload monitoring

Implemented Reference Application, a small microservice application that enables workload monitoring on non-MOSK managed clusters. It mimics a classical microservice application and provides metrics that describe the likely behavior of user workloads.

Reference Application includes a set of alerts and a separate Grafana dashboard that provide the Reference Application check statuses and statistics such as response time and content length.

The feature is disabled by default and can be enabled using the StackLight configuration manifest.

Ceph secrets specification in the Ceph cluster status

Added the miraCephSecretsInfo specification to KaaSCephCluster.status. This specification contains the current state and details of the secrets that are used in the Ceph cluster, such as keyrings, Ceph clients, RADOS Gateway user credentials, and so on.

Using miraCephSecretsInfo, you can create, access, and remove Ceph RADOS Block Device (RBD) or Ceph File System (CephFS) clients and RADOS Gateway (RGW) users.

Amazon S3 bucket policies for Ceph Object Storage users

Implemented the ability to create and configure Amazon S3 bucket policies between Ceph Object Storage users.

Components versions

The following table lists the components versions of the Cluster release 7.11.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration Updated 0

Mirantis Kubernetes Engine

3.4.11 1

Container runtime Updated 0

Mirantis Container Runtime

20.10.13 2

Distributed storage Updated

Ceph

15.2.17 (Octopus)

Rook

1.0.0-20220809220209

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-327-gbc30b11b

lcm-ansible Updated

0.19.0-12-g6cad672

lcm-agent Updated

0.3.0-327-gbc30b11b

metallb-controller Updated

0.13.4 3

metrics-server

0.5.2

StackLight

Alerta Updated

8.5.0-20220923121625

Alertmanager

0.23.0

Alertmanager Webhook ServiceNow

0.1-20220706035316

Cerebro Updated

0.9-20220923122026

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd Updated

1.14-20220922214003

Grafana

9.0.2

Grafana Image Renderer Updated

3.5.0

IAM Proxy

6.0.1

Metric Collector

0.1-20220711134630

Metricbeat Updated

7.10.2-20220909091002

OpenSearch

1-20220517112057

OpenSearch Dashboards

1-20220517112107

Prometheus

2.35.0

Prometheus Blackbox Exporter

0.19.0

Prometheus ES Exporter

0.14.0-20220517111946

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.2.2

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20220624102731

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

Reference Application New

0.0.1

sf-notifier

0.3-20220706035002

sf-reporter Updated

0.1-20220916113234

Spilo Updated

13-2.1p1-20220921105803

Telegraf

1.9.1-20220714080809

1.23.4-20220915114529 Updated

Telemeter

4.4.0-20200424

0(1,2)

For MOSK-based deployments, MKE will be updated from 3.4.10 to 3.4.11 and MCR will be updated from 20.10.12 to 20.10.13 in one of the following Container Cloud releases.

1

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

2

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

3

For MOSK-based deployments, the metallb-controller version is updated from 0.12.1 to 0.13.4 in MOSK 22.5.

Artifacts

This section lists the components artifacts of the Cluster release 7.11.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-964.tgz

Docker images

ceph Updated

mirantis.azurecr.io/ceph/ceph:v15.2.17

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20221024145202

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook Updated

mirantis.azurecr.io/ceph/rook:v1.0.0-20220809220209


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.19.0-12-g6cad672/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-327-gbc30b11b/lcm-agent

Helm charts Updated

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.34.16.tgz

metallb 0

https://binary.mirantis.com/core/helm/metallb-1.34.16.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.34.16.tgz

Docker images

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-327-gbc30b11b

metallb-controller Updated 0

mirantis.azurecr.io/bm/external/metallb/controller:v0.13.4

metallb-speaker Updated 0

mirantis.azurecr.io/bm/external/metallb/speaker:v0.13.4

metrics-server

mirantis.azurecr.io/core/external/metrics-server:v0.5.2

0(1,2,3)

For MOSK-based deployments, the metallb version is updated from 0.12.1 to 0.13.4 in MOSK 22.5.


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-25.tgz

alertmanager-webhook-servicenow Updated

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-4.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch-curator Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-10.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-37.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-logs-0.1.0-mcp-142.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-173.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.13.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-10.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-52.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-40.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-42.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-229.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams Updated

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-9.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

refapp New

https://binary.mirantis.com/scale/helm/refapp-0.1.1-mcp-1.tgz

sf-notifier Updated

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-4.tgz

sf-reporter Updated

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-5.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.9.2.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-7.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-7.tgz

Docker images

alerta-web Updated

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20220923121625

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20220706035316

blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.19.0

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro Updated

mirantis.azurecr.io/stacklight/cerebro:v0.9-20220923122026

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curator

mirantis.azurecr.io/stacklight/curator:5.7.6

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch_exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.14-20220922214003

grafana

mirantis.azurecr.io/stacklight/grafana:9.0.2

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.5.0

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.15.9

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.22.13

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20220711134630

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20220909091002

nginx-prometheus-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

opensearch

mirantis.azurecr.io/stacklight/opensearch:1-20220517112057

opensearch-dashboards

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20220517112107

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

pgbouncer

mirantis.azurecr.io/stacklight/pgbouncer:1.12.0

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.35.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20220517111946

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20220624102731

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

refapp New

mirantis.azurecr.io/openstack/openstack-refapp:0.0.1.dev29

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20220706035002

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20220916113234

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p1-20220921105803

stacklight-toolkit New

mirantis.azurecr.io/stacklight/stacklight-toolkit:20220729121446

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20220714080809

mirantis.azurecr.io/stacklight/telegraf:1.23.4-20220915114529 Updated

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:4.25.2


1

Only for bare metal and Equinix Metal with private networking

2

Only for existing bare metal clusters

7.10.0

This section outlines release notes for the Cluster release 7.10.0 that is introduced in the Mirantis Container Cloud release 2.20.0.

This Cluster release supports Mirantis Kubernetes Engine 3.4.10 with Kubernetes 1.20 and Mirantis Container Runtime 20.10.12.

For the list of known and resolved issues, refer to the Container Cloud release 2.20.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 7.10.0 that is introduced in the Container Cloud release 2.20.0.

MKE and MCR version update

Updated the Mirantis Kubernetes Engine (MKE) version from 3.4.9 to 3.4.10 and the Mirantis Container Runtime (MCR) version from 20.10.11 to 20.10.12 for the Container Cloud management, regional, and managed clusters on all supported cloud providers except MOSK-based deployments, as well as for non Container Cloud based MKE cluster attachment.

Ceph removal from management and regional clusters

To reduce resource consumption, removed Ceph cluster deployment from management and regional clusters based on bare metal and Equinix Metal with private networking. Ceph is automatically removed during the Cluster release update to 7.10.0. Managed clusters continue using Ceph as a distributed storage system.

Creation of Ceph RADOS Gateway users

Implemented the objectUsers RADOS Gateway parameter in the KaaSCephCluster CR. The new parameter allows for easy creation of custom Ceph RADOS Gateway users with permission rules, as shown in the sketch after the caution note below. The users parameter is now deprecated and, if specified, will be automatically transformed to objectUsers.

Caution

For MKE clusters that are part of MOSK infrastructure, the feature support will become available in one of the following Container Cloud releases.
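
A heavily simplified sketch of the new parameter. Only the objectUsers parameter name comes from this release; its placement within the KaaSCephCluster specification and the per-user fields are assumptions used for illustration only:

    objectUsers:                       # assumed placement within the RADOS Gateway section
      - name: storage-user             # hypothetical user definition
        displayName: Storage user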

Custom RBD map options

Implemented the rbdDeviceMapOptions field in the Ceph pool parameters of the KaaSCephCluster CR. The new field allows specifying custom RADOS Block Device (RBD) map options to use with the StorageClass of a corresponding Ceph pool, as shown in the sketch after the caution note below.

Caution

For MKE clusters that are part of MOSK infrastructure, the feature support will become available in one of the following Container Cloud releases.
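
A sketch of the new field inside a Ceph pool definition. The surrounding pool layout is an assumption; the option string follows the standard krbd map options format:

    pools:
      - name: kubernetes               # assumed pool layout, illustration only
        rbdDeviceMapOptions: lock_on_read,queue_depth=1024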

Ceph Manager modules configuration

Implemented the mgr.mgrModules parameter that includes the name and enabled keys to provide the capability to disable a particular Ceph Manager module. The mgr.modules parameter is now deprecated and, if specified, will be automatically transformed to mgr.mgrModules.
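
A minimal sketch; the name and enabled keys come from this release, while the module names and the surrounding nesting are illustrative:

    mgr:
      mgrModules:
        - name: balancer
          enabled: true
        - name: pg_autoscaler
          enabled: true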

Ceph daemons health check configuration

Implemented the capability to configure health checks and liveness probe settings for Ceph daemons through the KaaSCephCluster CR.

Caution

For MKE clusters that are part of MOSK infrastructure, the feature support will become available in one of the following Container Cloud releases.

Components versions

The following table lists the components versions of the Cluster release 7.10.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.4.10 0

Container runtime

Mirantis Container Runtime Updated

20.10.12 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook

1.0.0-20220504194120

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-285-g8498abe0

lcm-ansible Updated

0.18.1

lcm-agent Updated

0.3.0-288-g405179c2

metallb-controller Updated

0.12.1

metrics-server

0.5.2

StackLight

Alerta

8.5.0-20211108051042

Alertmanager

0.23.0

Alertmanager Webhook ServiceNow Updated

0.1-20220706035316

Cerebro

0.9.3

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.14-20220111114545

Grafana Updated

9.0.2

Grafana Image Renderer

3.4.2

IAM Proxy

6.0.1

Metric Collector Updated

0.1-20220711134630

Metricbeat

7.10.2-20220309185937

OpenSearch

1-20220517112057

OpenSearch Dashboards

1-20220517112107

Prometheus

2.35.0

Prometheus Blackbox Exporter

0.19.0

Prometheus ES Exporter

0.14.0-20220517111946

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.2.2

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter Updated

0.1-20220624102731

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

sf-notifier Updated

0.3-20220706035002

sf-reporter Updated

0.1-20220622101204

Spilo

13-2.1p1-20220225091552

Telegraf Updated

1.9.1-20220714080809

1.20.2-20220204122426

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 7.10.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-908.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20220819101016

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook:v1.0.0-20220504194120


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.18.1/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-288-g405179c2/lcm-agent

Helm charts Updated

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.33.5.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.33.5.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.33.5.tgz

Docker images

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-285-g8498abe0

metallb-controller Updated

mirantis.azurecr.io/bm/external/metallb/controller:v0.12.1

metallb-speaker Updated

mirantis.azurecr.io/bm/external/metallb/speaker:v0.12.1

metrics-server

mirantis.azurecr.io/core/external/metrics-server:v0.5.2


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-25.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-9.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-37.tgz

fluentd-logs Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-131.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-154.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.13.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-6.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch Updated

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-52.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-40.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-42.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-228.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-8.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-2.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-3.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.8.1.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-6.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-6.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20211108051042

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20220706035316

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.14-20220111114545

grafana Updated

mirantis.azurecr.io/stacklight/grafana:9.0.2

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.4.2

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.15.9

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20220711134630

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20220309185937

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

opensearch

mirantis.azurecr.io/stacklight/opensearch:1-20220517112057

opensearch-dashboards

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20220517112107

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

pgbouncer

mirantis.azurecr.io/stacklight/pgbouncer:1.12.0

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.35.0

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.19.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20220517111946

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20220624102731

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20220706035002

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20220622101204

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p1-20220225091552

telegraf Updated

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20220714080809

mirantis.azurecr.io/stacklight/telegraf:1.20.2-20220204122426

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:4.25.2


7.9.0

This section outlines release notes for the Cluster release 7.9.0 that is introduced in the Mirantis Container Cloud release 2.19.0.

This Cluster release supports Mirantis Kubernetes Engine 3.4.9 with Kubernetes 1.20 and Mirantis Container Runtime 20.10.11.

For the list of known and resolved issues, refer to the Container Cloud release 2.19.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 7.9.0 that is introduced in the Container Cloud release 2.19.0.


MKE version update

Updated the Mirantis Kubernetes Engine (MKE) version from 3.4.8 to 3.4.9 for the Container Cloud management, regional, and managed clusters on all supported cloud providers except MOSK-based deployments, as well as for the attachment of existing MKE clusters that are not based on Container Cloud.

Kubernetes Containers Grafana dashboard

Implemented a new Kubernetes Containers Grafana dashboard that provides resources consumption metrics of containers running on Kubernetes nodes.

Caution

For MKE clusters that are part of MOSK infrastructure, the feature support will become available in one of the following Container Cloud releases.

Improvements to StackLight alerting

Enhanced the documentation by adding troubleshooting guidelines for the Kubernetes system, Metric Collector, Helm Controller, Release Controller, and MKE alerts.

Learn more

Troubleshoot alerts

Elasticsearch switch to OpenSearch

As part of the Elasticsearch switch to OpenSearch, replaced the Elasticsearch parameters with the OpenSearch ones in the Container Cloud web UI.

Ceph cluster summary in Container Cloud web UI

Implemented the capability to easily view the summary and health status of all Ceph clusters through the Container Cloud web UI. The feature is supported for the bare metal provider only.

Caution

For MKE clusters that are part of MOSK infrastructure, the feature support will become available in one of the following Container Cloud releases.

Ceph OSD removal or replacement by ID

Implemented the capability to remove or replace Ceph OSDs not only by the device name or path but also by ID, using the by-id parameter in the KaaSCephOperationRequest CR.

Caution

For MKE clusters that are part of MOSK infrastructure, the feature support will become available in one of the following Container Cloud releases.

Learn more

Automated Ceph LCM

Multiple Ceph data pools per CephFS

TechPreview

Implemented the capability to create multiple Ceph data pools for a single CephFS installation using the dataPools parameter in the CephFS specification, as illustrated below. The dataPool parameter is now deprecated.

Caution

For MKE clusters that are part of MOSK infrastructure, the feature is not supported yet.
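For illustration only, the following minimal sketch shows how several data pools might be declared with the dataPools parameter. Only the dataPools parameter itself is taken from this section; the surrounding CephFS specification structure and the pool definitions are assumptions and may differ from the actual KaaSCephCluster schema, so refer to the Ceph operations documentation for the exact layout.

spec:
  cephClusterSpec:
    sharedFilesystem:            # assumed wrapping structure for the CephFS specification
      cephFS:
        - name: cephfs-store
          # dataPools replaces the deprecated dataPool parameter and accepts a list of pools
          dataPools:
            - name: cephfs-pool-1
              replicated:
                size: 3
            - name: cephfs-pool-2
              replicated:
                size: 2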

Components versions

The following table lists the components versions of the Cluster release 7.9.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.4.9 0

Container runtime

Mirantis Container Runtime

20.10.11 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook

1.0.0-20220504194120

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-257-ga93244da

lcm-ansible Updated

0.17.1-2-g1e337f8

lcm-agent Updated

0.3.0-257-ga93244da

metallb-controller

0.9.3-1

metrics-server

0.5.2

StackLight

Alerta

8.5.0-20211108051042

Alertmanager

0.23.0

Alertmanager Webhook ServiceNow

0.1-20220420161450

Cerebro

0.9.3

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.14-20220111114545

Grafana

8.5.0

Grafana Image Renderer Updated

3.4.2

IAM Proxy

6.0.1

Metric Collector Updated

0.1-20220614110617

Metricbeat

7.10.2-20220309185937

OpenSearch Updated

1-20220517112057

OpenSearch Dashboards Updated

1-20220517112107

Patroni

13-2.1p1-20220225091552

Prometheus Updated

2.35.0

Prometheus Blackbox Exporter

0.19.0

Prometheus ES Exporter Updated

0.14.0-20220517111946

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.2.2

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

sf-notifier Updated

0.3-20220514051554

sf-reporter

0.1-20220419092138

Telegraf

1.9.1-20210225142050

1.20.2-20220204122426

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 7.9.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-831.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20220715144333

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook:v1.0.0-20220504194120


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.17.1-2-g1e337f8/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-257-ga93244da/lcm-agent

Helm charts Updated

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.32.4.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.32.4.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.32.4.tgz

Docker images

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-257-ga93244da

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/core/external/metrics-server:v0.5.2


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-25.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-9.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-37.tgz

fluentd-logs

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-128.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-150.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.12.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-6.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-50.tgz

opensearch-dashboards

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-40.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-42.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-228.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-8.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-2.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-3.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.7.2.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-5.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-5.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20211108051042

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20220420161450

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.14-20220111114545

grafana

mirantis.azurecr.io/stacklight/grafana:8.5.0

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.4.2

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.15.9

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20220614110617

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20220309185937

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:1-20220517112057

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20220517112107

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus Updated

mirantis.azurecr.io/stacklight/prometheus:v2.35.0

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.19.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20220517111946

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20220514051554

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20220419092138

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p1-20220225091552

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225142050

mirantis.azurecr.io/stacklight/telegraf:1.20.2-20220204122426

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq Updated

mirantis.azurecr.io/stacklight/yq:4.25.2


7.8.0

This section outlines release notes for the Cluster release 7.8.0 that is introduced in the Mirantis Container Cloud release 2.18.0.

This Cluster release supports Mirantis Kubernetes Engine 3.4.8 with Kubernetes 1.20 and Mirantis Container Runtime 20.10.11.

For the list of known and resolved issues, refer to the Container Cloud release 2.18.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 7.8.0 that is introduced in the Container Cloud release 2.18.0.


MKE and MCR version update

Updated the Mirantis Kubernetes Engine (MKE) version from 3.4.7 to 3.4.8 and the Mirantis Container Runtime (MCR) version from 20.10.8 to 20.10.11 for the Container Cloud management, regional, and managed clusters on all supported cloud providers, as well as for the attachment of existing MKE clusters that are not based on Container Cloud.

Elasticsearch switch to OpenSearch

As part of the Elasticsearch switch to OpenSearch, removed the Elasticsearch and Kibana services and introduced a set of new parameters that will replace the current ones in future releases. The old parameters are still supported and take precedence over the new ones. For details, see Deprecation notes and StackLight configuration parameters.

Note

In the Container Cloud web UI, the Elasticsearch and Kibana naming is still present. However, the services behind them have switched to OpenSearch and OpenSearch Dashboards.

Improvements to StackLight alerting

Implemented the following improvements to StackLight alerting:

  • Added the MCCClusterUpdating informational alert that is raised when the Mirantis Container Cloud cluster starts updating.

  • Enhanced StackLight alerting by clarifying alert severity levels. Switched all Minor alerts to Warning. Now, only alerts of the following severities exist: informational, warning, major, and critical.

  • Enhanced the documentation by adding troubleshooting guidelines for the Kubernetes applications, resources, and storage alerts.

Prometheus remote write

Implemented the capability to send metrics from Prometheus to a custom monitoring endpoint using the Prometheus remote write feature.
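As an illustration only, a remote write target is typically described with the standard Prometheus remote_write fields, such as a URL and optional authentication. The StackLight key that wraps this configuration (shown here as prometheusServer.remoteWrites) is an assumption; see StackLight configuration parameters for the exact name and placement.

prometheusServer:
  remoteWrites:                      # assumed StackLight key; entries follow the Prometheus remote_write format
    - url: https://monitoring.example.com/api/v1/write
      basic_auth:                    # standard Prometheus remote_write authentication block, optional
        username: stacklight
        password: example-password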

StackLight mandatory parameters

Defined the following parameters in the StackLight configuration of the Cluster object as mandatory for all cluster types. This applies only to clusters with StackLight enabled. For existing clusters, the Cluster object is updated automatically. A values sketch follows the parameters table below.

Important

When creating a new cluster, specify these parameters through the Container Cloud web UI or as described in StackLight configuration parameters. Update all cluster templates created before Container Cloud 2.18.0 that do not have values for these parameters specified. Otherwise, the Admission Controller will reject cluster creation.

Web UI parameter

API parameter

Enable Logging

logging.enabled

HA Mode

highAvailabilityEnabled

Prometheus Persistent Volume Claim Size

prometheusServer.persistentVolumeClaimSize

Elasticsearch Persistent Volume Claim Size

elasticsearch.persistentVolumeClaimSize
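For reference, the four mandatory API parameters map to the StackLight values of the Cluster object as in the following minimal sketch. The values shown are placeholders, and the surrounding Cluster object structure above the StackLight section is omitted; see StackLight configuration parameters for the full context.

# StackLight values fragment of the Cluster object (placeholder values)
logging:
  enabled: true
highAvailabilityEnabled: true
prometheusServer:
  persistentVolumeClaimSize: 16Gi
elasticsearch:
  persistentVolumeClaimSize: 30Gi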

Ceph daemons placement

Implemented the capability to configure the placement of the rook-ceph-operator, rook-discover, and csi-rbdplugin Ceph daemons.

Components versions

The following table lists the components versions of the Cluster release 7.8.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.4.8 0

Container runtime

Mirantis Container Runtime Updated

20.10.11 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook

1.0.0-20220504194120

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-239-gae7218ea

lcm-ansible Updated

0.16.0-13-gcac49ca

lcm-agent Updated

0.3.0-239-gae7218ea

metallb-controller

0.9.3-1

metrics-server

0.5.2

StackLight

Alerta

8.5.0-20211108051042

Alertmanager

0.23.0

Alertmanager Webhook ServiceNow Updated

0.1-20220420161450

Cerebro

0.9.3

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.14-20220111114545

Grafana Updated

8.5.0

Grafana Image Renderer

3.2.1

IAM Proxy

6.0.1

Metric Collector

0.1-20220209123106

Metricbeat

7.10.2-20220309185937

OpenSearch

1-20220316161927

OpenSearch Dashboards

1-20220316161951

Patroni

13-2.1p1-20220225091552

Prometheus

2.31.1

Prometheus Blackbox Exporter

0.19.0

Prometheus ES Exporter

0.14.0-20220111114356

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.2.2

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

sf-notifier

0.3-20210930112115

sf-reporter Updated

0.1-20220419092138

Telegraf

1.9.1-20210225142050

1.20.2-20220204122426

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 7.8.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-792.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20220506180707

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook:v1.0.0-20220504194120


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.16.0-13-gcac49ca/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-239-gae7218ea/lcm-agent

Helm charts Updated

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.31.9.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.31.9.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.31.9.tgz

Docker images

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-239-gae7218ea

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.5.2


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-25.tgz

alertmanager-webhook-servicenow Updated

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch Removed

n/a

elasticsearch-curator Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-9.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-37.tgz

fluentd-elasticsearch Removed

n/a

fluentd-logs New

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-128.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-145.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.12.tgz

kibana Removed

n/a

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-6.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

opensearch New

https://binary.mirantis.com/stacklight/helm/opensearch-0.1.0-mcp-50.tgz

opensearch-dashboards New

https://binary.mirantis.com/stacklight/helm/opensearch-dashboards-0.1.0-mcp-40.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-42.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-225.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams Updated

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-8.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

sf-notifier Updated

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-2.tgz

sf-reporter Updated

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-3.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.6.1.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-5.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-5.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20211108051042

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20220420161450

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.14-20220111114545

grafana Updated

mirantis.azurecr.io/stacklight/grafana:8.5.0

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.2.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.15.9

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20220209123106

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20220309185937

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

opensearch

mirantis.azurecr.io/stacklight/opensearch:1-20220316161927

opensearch-dashboards

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20220316161951

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.31.1

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.19.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20220111114356

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210930112115

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20220419092138

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.1p1-20220225091552

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

mirantis.azurecr.io/stacklight/telegraf:1.20.2-20220204122426

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


7.7.0

This section outlines release notes for the Cluster release 7.7.0 that is introduced in the Mirantis Container Cloud release 2.17.0.

This Cluster release supports Mirantis Kubernetes Engine 3.4.7 with Kubernetes 1.20 and Mirantis Container Runtime 20.10.8.

For the list of known and resolved issues, refer to the Container Cloud release 2.17.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 7.7.0 that is introduced in the Container Cloud release 2.17.0.


Elasticsearch retention time per index

Implemented the capability to configure the Elasticsearch retention time separately for the logs, events, and notifications indices when creating a managed cluster through the Container Cloud web UI.

The Retention Time parameter in the Container Cloud web UI is now replaced with the Logstash Retention Time, Events Retention Time, and Notifications Retention Time parameters.

Helm Controller monitoring

Implemented monitoring and added alerts for the Helm Controller service and the HelmBundle custom resources.

Configurable timeouts for Ceph requests

Implemented configurable timeouts for Ceph request processing. The default is set to 30 minutes. You can configure the timeout using the pgRebalanceTimeoutMin parameter in the Ceph Helm chart.

Configurable replicas count for Ceph controllers

Implemented the capability to configure the replicas count for cephController, cephStatus, and cephRequest controllers using the replicas parameter in the Ceph Helm chart. The default is set to 3 replicas.
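As a minimal illustration of the previous two enhancements, the pgRebalanceTimeoutMin and replicas parameters could be set in the Ceph Helm chart values as follows. Only the parameter names and default values are taken from this section; their exact placement within the chart values is an assumption.

# Ceph Helm chart values fragment (placement of the keys is assumed)
pgRebalanceTimeoutMin: 30   # timeout for Ceph request processing, in minutes (default)
replicas: 3                 # replicas count for the cephController, cephStatus, and cephRequest controllers (default)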

Ceph KaaSCephCluster Controller

Implemented a separate ceph-kcc-controller that runs on a management cluster and manages the KaaSCephCluster custom resource (CR). Previously, the KaaSCephCluster CR was managed by bm-provider.

Learn more

Ceph overview

Components versions

The following table lists the components versions of the Cluster release 7.7.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.4.7 0

Container runtime

Mirantis Container Runtime Updated

20.10.8 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook Updated

1.0.0-20220504194120

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-229-g4774bbbb

lcm-ansible Updated

0.15.0-24-gf023ea1

lcm-agent Updated

0.3.0-229-g4774bbbb

metallb-controller

0.9.3-1

metrics-server

0.5.2

StackLight

Alerta

8.5.0-20211108051042

Alertmanager

0.23.0

Alertmanager Webhook ServiceNow

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.14-20220111114545

Grafana

8.2.7

Grafana Image Renderer

3.2.1

IAM Proxy

6.0.1

Metric Collector

0.1-20220209123106

Metricbeat Updated

7.10.2-20220309185937

OpenSearch Updated

1-20220316161927

OpenSearch Dashboards Updated

1-20220316161951

Patroni Updated

13-2.1p1-20220225091552

Prometheus

2.31.1

Prometheus Blackbox Exporter

0.19.0

Prometheus ES Exporter

0.14.0-20220111114356

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.2.2

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

sf-notifier

0.3-20210930112115

sf-reporter

0.1-20210607111404

Telegraf

1.9.1-20210225142050

Updated

1.20.2-20220204122426

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 7.7.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-719.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20220421152918

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook Updated

mirantis.azurecr.io/ceph/rook:v1.0.0-20220504194120


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.15.0-24-gf023ea1/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-229-g4774bbbb/lcm-agent

Helm charts Updated

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.30.6.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.30.6.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.30.6.tgz

Docker images

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-229-g4774bbbb

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.5.2


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-25.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-1.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-45.tgz

elasticsearch-curator Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-8.tgz

elasticsearch-exporter Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-6.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-36.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-123.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-130.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.12.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-36.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-4.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-42.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-218.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-1.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-1.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.5.3.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-4.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-4.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20211108051042

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.14-20220111114545

grafana

mirantis.azurecr.io/stacklight/grafana:8.2.7

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.2.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20220209123106

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20220309185937

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

opensearch Updated

mirantis.azurecr.io/stacklight/opensearch:1-20220316161927

opensearch-dashboards Updated

mirantis.azurecr.io/stacklight/opensearch-dashboards:1-20220316161951

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.31.1

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.19.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20220111114356

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210930112115

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20210607111404

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p1-20220225091552

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

Updated

mirantis.azurecr.io/stacklight/telegraf:1.20.2-20220204122426

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


7.6.0

This section outlines release notes for the Cluster release 7.6.0 that is introduced in the Mirantis Container Cloud release 2.16.0.

This Cluster release supports Mirantis Kubernetes Engine 3.4.7 with Kubernetes 1.20 and Mirantis Container Runtime 20.10.8.

For the list of known and resolved issues, refer to the Container Cloud release 2.16.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 7.6.0 that is introduced in the Container Cloud release 2.16.0.


MKE version update from 3.4.6 to 3.4.7

Updated the Mirantis Kubernetes Engine (MKE) version from 3.4.6 to 3.4.7 for the Container Cloud management, regional, and managed clusters. Also, added support for the attachment of existing MKE 3.4.7 clusters.

Improvements to StackLight alerting

Added the KubePodsRegularLongTermRestarts alert that is raised in case of long-term periodic container restarts.

Elasticsearch retention time per index

Implemented the capability to configure the Elasticsearch retention time per index using the elasticsearch.retentionTime parameter in the StackLight Helm chart. Now, you can configure different retention periods for different indices: logs, events, and notifications.

The elasticsearch.logstashRetentionTime parameter is now deprecated.
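A hypothetical StackLight Helm chart values fragment could look as follows. Only the elasticsearch.retentionTime parameter and the list of indices are taken from this section; the per-index key names and the retention unit are assumptions for illustration.

elasticsearch:
  retentionTime:          # replaces the deprecated elasticsearch.logstashRetentionTime
    logstash: 3           # retention for the logs index (assumed key name and unit in days)
    events: 10
    notifications: 10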

Prometheus Blackbox Exporter configuration

Implemented the capability to configure Prometheus Blackbox Exporter, including customModules and timeoutOffset, through the StackLight Helm chart.
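A sketch of such a configuration might look as follows. Only customModules and timeoutOffset are taken from this section; the wrapping key and the example values are assumptions, while the module body uses standard Blackbox Exporter module fields.

blackboxExporter:               # assumed wrapping key in the StackLight Helm chart values
  timeoutOffset: 0.25
  customModules:
    http_2xx_custom:            # standard Blackbox Exporter module definition
      prober: http
      timeout: 10s
      http:
        valid_status_codes: [200, 204]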

Custom Prometheus scrape configurations

Implemented the capability to define custom Prometheus scrape configurations.
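For illustration, a custom scrape configuration uses the standard Prometheus scrape_config format; the StackLight key that carries it (shown here as prometheusServer.customScrapeConfigs) is an assumption, so verify the exact parameter name in StackLight configuration parameters.

prometheusServer:
  customScrapeConfigs:          # assumed StackLight key; entries follow the Prometheus scrape_config format
    - job_name: custom-app
      scrape_interval: 30s
      static_configs:
        - targets:
            - 10.0.0.15:9090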

Elasticsearch switch to OpenSearch

Due to licensing changes for Elasticsearch, Mirantis Container Cloud has switched from using Elasticsearch to OpenSearch and Kibana has switched to OpenSearch Dashboards. OpenSearch is a fork of Elasticsearch under the open-source Apache License with development led by Amazon Web Services.

For new deployments with the logging stack enabled, OpenSearch is now deployed by default. For existing deployments, migration to OpenSearch is performed automatically during the cluster update. However, the entire Elasticsearch cluster may go down for up to 15 minutes during the migration.

Components versions

The following table lists the components versions of the Cluster release 7.6.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.4.7 0

Container runtime

Mirantis Container Runtime Updated

20.10.8 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook

1.7.6

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-187-gba894556

lcm-ansible Updated

0.14.0-14-geb6a51f

lcm-agent Updated

0.3.0-187-gba894556

metallb-controller

0.9.3-1

metrics-server Updated

0.5.2

StackLight

Alerta

8.5.0-20211108051042

Alertmanager

0.23.0

Alertmanager Webhook ServiceNow

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch Removed

n/a

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd Updated

1.14-20220111114545

Grafana

8.2.7

Grafana Image Renderer

3.2.1

IAM Proxy

6.0.1

Kibana Removed

n/a

Metric Collector Updated

0.1-20220209123106

Metricbeat Updated

7.10.2-20220111114624

OpenSearch New

1.2-20220114131142

OpenSearch Dashboards New

1.2-20220114131222

Patroni Updated

13-2.1p1-20220131130853

Prometheus

2.31.1

Prometheus Blackbox Exporter Updated

0.19.0

Prometheus ES Exporter Updated

0.14.0-20220111114356

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.2.2

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

sf-notifier

0.3-20210930112115

sf-reporter

0.1-20210607111404

Telegraf

1.9.1-20210225142050

1.20.0-20210927090119

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 7.6.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-661.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20220203124822

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.7.6


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.14.0-14-geb6a51f/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-187-gba894556/lcm-agent

Helm charts Updated

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.29.6.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.29.6.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.29.6.tgz

Docker images

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-187-gba894556

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server Updated

mirantis.azurecr.io/lcm/metrics-server-amd64:v0.5.2


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-25.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-1.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-44.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-6.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-36.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-120.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-125.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.12.tgz

kibana Updated

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-36.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-4.tgz

metricbeat Updated

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-16.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-38.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-218.tgz

prometheus-blackbox-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-11.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-1.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-1.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.4.3.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-4.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-4.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20211108051042

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch Removed

n/a

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.14-20220111114545

grafana

mirantis.azurecr.io/stacklight/grafana:8.2.7

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.2.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana Removed

n/a

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20220209123106

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20220111114624

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

opensearch New

mirantis.azurecr.io/stacklight/opensearch:1.2-20220114131142

opensearch-dashboards New

mirantis.azurecr.io/stacklight/opensearch-dashboards:1.2-20220114131222

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.31.1

prometheus-blackbox-exporter Updated

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.19.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20220111114356

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210930112115

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20210607111404

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.1p1-20220131130853

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

mirantis.azurecr.io/stacklight/telegraf:1.20.0-20210927090119

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


7.5.0

This section outlines release notes for the Cluster release 7.5.0 that is introduced in the Mirantis Container Cloud release 2.15.0.

This Cluster release supports Mirantis Kubernetes Engine 3.4.6 with Kubernetes 1.20 and Mirantis Container Runtime 20.10.8.

For the list of known and resolved issues, refer to the Container Cloud release 2.15.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 7.5.0 that is introduced in the Container Cloud release 2.15.0.


MCR version update

Updated the Mirantis Container Runtime (MCR) version from 20.10.6 to 20.10.8 for the Container Cloud management, regional, and managed clusters on all supported cloud providers.

Mirantis Container Cloud alerts

Implemented the MCCLicenseExpirationCritical and MCCLicenseExpirationMajor alerts that notify about the Mirantis Container Cloud license expiring in less than 10 and 30 days respectively.

Improvements to StackLight alerting

Implemented the following improvements to StackLight alerting:

  • Enhanced Kubernetes applications alerting:

    • Reworked the Kubernetes applications alerts to minimize flapping, avoid firing during pod rescheduling, and detect crash looping for pods that restart less frequently.

    • Added the KubeDeploymentOutage, KubeStatefulSetOutage, and KubeDaemonSetOutage alerts.

    • Removed the redundant KubeJobCompletion alert.

    • Enhanced the alert inhibition rules to reduce alert flooding.

    • Improved alert descriptions.

  • Split TelemeterClientFederationFailed into TelemeterClientFailed and TelemeterClientHAFailed to separate alerts depending on the HA mode disabled or enabled.

  • Updated the description for DockerSwarmNodeFlapping.

Node Exporter collectors

Disabled unused Node Exporter collectors and implemented the capability to manually enable the required collectors using the nodeExporter.extraCollectorsEnabled parameter, as shown in the sketch after the list below. Only the following collectors are now enabled by default in StackLight:

  • arp

  • conntrack

  • cpu

  • diskstats

  • entropy

  • filefd

  • filesystem

  • hwmon

  • loadavg

  • meminfo

  • netdev

  • netstat

  • nfs

  • stat

  • sockstat

  • textfile

  • time

  • timex

  • uname

  • vmstat
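The sketch below shows how additional collectors could be re-enabled with the nodeExporter.extraCollectorsEnabled parameter. The parameter name is taken from this section; the collector names are examples of standard Node Exporter collectors and are not enabled by default.

nodeExporter:
  extraCollectorsEnabled:   # collectors to enable in addition to the default set listed above
    - systemd
    - processes
    - tcpstat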

Automated Ceph LCM

Implemented full support for automated Ceph LCM operations using the KaaSCephOperationRequest CR, such as addition or removal of Ceph OSDs and nodes, as well as replacement of failed Ceph OSDs or nodes.

Learn more

Automated Ceph LCM

Ceph CSI provisioner tolerations and node affinity

Implemented the capability to specify Container Storage Interface (CSI) provisioner tolerations and node affinity for different Rook resources. Added support for the all and mds keys in toleration rules.

Ceph KaaSCephCluster.status enhancement

Extended the fullClusterInfo section of the KaaSCephCluster.status resource with the following fields:

  • cephDetails - contains verbose details of a Ceph cluster state

  • cephCSIPluginDaemonsStatus - contains details on all Ceph CSIs

Ceph Shared File System (CephFS)

TechPreview

Implemented the capability to enable the Ceph Shared File System, or CephFS, to create read/write shared file system Persistent Volumes (PVs).

Components versions

The following table lists the components versions of the Cluster release 7.5.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.4.6 0

Container runtime

Mirantis Container Runtime Updated

20.10.8 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook

1.7.6

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-132-g83a348fa

lcm-ansible Updated

0.13.0-26-gad73ff7

lcm-agent Updated

0.3.0-132-g83a348fa

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

8.5.0-20211108051042

Alertmanager Updated

0.23.0

Alertmanager Webhook ServiceNow

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch

7.10.2-20211102101126

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20210915110132

Grafana Updated

8.2.7

Grafana Image Renderer

3.2.1

IAM Proxy

6.0.1

Kibana

7.10.2-20211101074638

Metric Collector

0.1-20211109121134

Metricbeat

7.10.2-20211103140113

Patroni

13-2.0p6-20210525081943

Prometheus Updated

2.31.1

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.14.0-20210812120726

Prometheus MS Teams

1.4.2

Prometheus Node Exporter Updated

1.2.2

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

Pushgateway Removed

n/a

sf-notifier

0.3-20210930112115

sf-reporter

0.1-20210607111404

Telegraf

1.9.1-20210225142050

1.20.0-20210927090119

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 7.5.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-606.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20220110132813

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.7.6


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.13.0-26-gad73ff7/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-132-g83a348fa/lcm-agent

Helm charts Updated

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.28.7.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.28.7.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.28.7.tgz

Docker images

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-132-g83a348fa

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta Updated

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-25.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-1.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-37.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-6.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-32.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-115.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-121.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.10.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-30.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-3.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-12.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-36.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-214.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-7.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-1.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-1.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.3.1.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-1.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-1.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20211108051042

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.10.2-20211102101126

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210915110132

grafana Updated

mirantis.azurecr.io/stacklight/grafana:8.2.7

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.2.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana

mirantis.azurecr.io/stacklight/kibana:7.10.2-20211101074638

kube-state-metrics Updated

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20211109121134

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20211103140113

node-exporter Updated

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus Updated

mirantis.azurecr.io/stacklight/prometheus:v2.31.1

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20210812120726

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway Removed

n/a

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210930112115

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20210607111404

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.0p6-20210525081943

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

mirantis.azurecr.io/stacklight/telegraf:1.20.0-20210927090119

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


7.4.0

This section outlines release notes for the Cluster release 7.4.0 that is introduced in the Mirantis Container Cloud release 2.14.0.

This Cluster release supports Mirantis Kubernetes Engine 3.4.6 with Kubernetes 1.20 and Mirantis Container Runtime 20.10.6.

For the list of known and resolved issues, refer to the Container Cloud release 2.14.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 7.4.0 that is introduced in the Container Cloud release 2.14.0.


MKE version update from 3.4.5 to 3.4.6

Updated the Mirantis Kubernetes Engine version from 3.4.5 to 3.4.6 for the Container Cloud management, regional, and managed clusters. Also, added support for attachment of existing MKE 3.4.6 clusters.

Network interfaces monitoring

Limited the number of monitored network interfaces to prevent excessive Prometheus RAM consumption in big clusters. By default, Prometheus Node Exporter now collects information only about a basic set of interfaces, both host and container. If required, you can edit the list of excluded devices.

Custom Prometheus recording rules

Implemented the capability to define custom Prometheus recording rules through the prometheusServer.customRecordingRules parameter in the StackLight Helm chart. Overriding existing recording rules is not supported.
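
For illustration, below is a minimal sketch of such values for the StackLight Helm chart, assuming that prometheusServer.customRecordingRules accepts standard Prometheus recording rule groups; the group name, rule name, and expression are examples only:

    prometheusServer:
      customRecordingRules:
        - name: custom.rules                                # example group name
          rules:
            - record: instance:node_cpu_utilisation:avg5m   # example rule name
              expr: '1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))'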

Syslog packet size configuration

Implemented the capability to configure packet size for the syslog logging output. If remote logging to syslog is enabled in StackLight, use the logging.syslog.packetSize parameter in the StackLight Helm chart to configure the packet size.
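
For illustration, a minimal sketch of the corresponding StackLight Helm chart values; the enabled key and the packet size value are assumptions used only to show where logging.syslog.packetSize fits:

    logging:
      syslog:
        enabled: true          # assumes remote logging to syslog is enabled
        packetSize: 2048       # maximum syslog packet size (example value)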

Prometheus Relay configuration

Implemented the capability to configure the Prometheus Relay client timeout and response size limit through the prometheusRelay.clientTimeout and prometheusRelay.responseLimitBytes parameters in the StackLight Helm chart.
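
A minimal sketch of the StackLight Helm chart values using the parameters named above; the value formats and numbers are assumptions for illustration:

    prometheusRelay:
      clientTimeout: 30              # client timeout (example value)
      responseLimitBytes: 1048576    # response size limit in bytes (example value)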

Ceph networks validation

Implemented additional validation of networks specified in spec.cephClusterSpec.network.publicNet and spec.cephClusterSpec.network.clusterNet and prohibited the use of the 0.0.0.0/0 CIDR. Now, the bare metal provider automatically translates the 0.0.0.0/0 network range to the default LCM IPAM subnet if it exists.

You can now also add corresponding labels for the bare metal IPAM subnets when configuring the Ceph cluster during the management cluster deployment.
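
For illustration, a fragment of a KaaSCephCluster specification that passes this validation by defining explicit CIDRs instead of 0.0.0.0/0; the address ranges are examples only:

    spec:
      cephClusterSpec:
        network:
          publicNet: 10.10.0.0/24     # example public network CIDR
          clusterNet: 10.10.1.0/24    # example cluster (replication) network CIDR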

Enhanced Ceph architecture

To improve debugging and log reading, separated Ceph Controller, Ceph Status Controller, and Ceph Request Controller, which used to run in one pod, into three different deployments.

Automated Ceph OSD removal

TechPreview

Implemented the KaaSCephOperationRequest CR that provides LCM operations for Ceph OSDs and nodes by automatically creating separate CephOsdRemoveRequest requests. It allows for automated removal of healthy or non-healthy Ceph OSDs from a Ceph cluster.

Due to the Technology Preview status of the feature, Mirantis recommends following the Remove Ceph OSD manually procedure for Ceph OSD removal.

Learn more

Manage Ceph

Components versions

The following table lists the components versions of the Cluster release 7.4.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.4.6 0

Container runtime

Mirantis Container Runtime

20.10.6 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook Updated

1.7.6

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-104-gb7f5e8d8

lcm-ansible Updated

0.12.0-6-g5329efe

lcm-agent Updated

0.3.0-104-gb7f5e8d8

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta Updated

8.5.0-20211108051042

Alertmanager

0.22.2

Alertmanager Webhook ServiceNow

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch Updated

7.10.2-20211102101126

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20210915110132

Grafana Updated

8.2.2

Grafana Image Renderer Updated

3.2.1

IAM Proxy

6.0.1

Kibana Updated

7.10.2-20211101074638

Metric Collector Updated

0.1-20211109121134

Metricbeat Updated

7.10.2-20211103140113

Patroni

13-2.0p6-20210525081943

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.14.0-20210812120726

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

Pushgateway

1.2.0

sf-notifier

0.3-20210930112115

sf-reporter

0.1-20210607111404

Telegraf

1.9.1-20210225142050

1.20.0-20210927090119

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 7.4.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-526.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20211109132703

cephcsi Updated

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook Updated

mirantis.azurecr.io/ceph/rook/ceph:v1.7.6


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.12.0-6-g5329efe/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-104-gb7f5e8d8/lcm-agent

Helm charts

managed-lcm-api Updated

https://binary.mirantis.com/core/helm/managed-lcm-api-1.27.6.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.27.6.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.27.6.tgz

Docker images

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-104-gb7f5e8d8

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-22.tgz

alertmanager-webhook-servicenow Updated

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-1.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-37.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-6.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-32.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-112.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-115.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.9.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-30.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-1.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-12.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-36.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-208.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-7.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

sf-notifier Updated

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-1.tgz

sf-reporter Updated

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-1.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.2.5.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-1.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-1.tgz

Docker images

alerta Updated

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20211108051042

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.22.2

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch Updated

mirantis.azurecr.io/stacklight/elasticsearch:7.10.2-20211102101126

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210915110132

grafana Updated

mirantis.azurecr.io/stacklight/grafana:8.2.2

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.2.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana Updated

mirantis.azurecr.io/stacklight/kibana:7.10.2-20211101074638

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20211109121134

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20211103140113

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20210812120726

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210930112115

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20210607111404

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.0p6-20210525081943

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

mirantis.azurecr.io/stacklight/telegraf:1.20.0-20210927090119

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


7.3.0

This section outlines release notes for the Cluster release 7.3.0 that is introduced in the Mirantis Container Cloud release 2.13.0.

This Cluster release supports Mirantis Kubernetes Engine 3.4.5 with Kubernetes 1.20 and Mirantis Container Runtime 20.10.6.

For the list of known and resolved issues, refer to the Container Cloud release 2.13.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 7.3.0 that is introduced in the Container Cloud release 2.13.0.


Improvements to StackLight alerting

Implemented the following improvements to StackLight alerting:

  • Implemented per-service *TargetDown and *TargetsOutage alerts that raise when one or all Prometheus targets of the corresponding service are down.

  • Enhanced the alert inhibition rules to reduce alert flooding.

  • Removed the following inefficient alerts:

    • TargetDown

    • TargetFlapping

    • KubeletDown

    • ServiceNowWebhookReceiverDown

    • SfNotifierDown

    • PrometheusMsTeamsDown

Components versions

The following table lists the components versions of the Cluster release 7.3.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.4.5 0

Container runtime

Mirantis Container Runtime

20.10.6 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook

1.6.8

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-67-g25ab9f1a

lcm-ansible Updated

0.11.0-6-gbfce76e

lcm-agent Updated

0.3.0-67-g25ab9f1a

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

8.4.1-20210707092546

Alertmanager

0.22.2

Alertmanager Webhook ServiceNow

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch

7.10.2-20210601104922

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd Updated

1.10.2-20210915110132

Grafana Updated

8.1.2

Grafana Image Renderer

2.0.1

IAM Proxy

6.0.1

Kibana

7.10.2-20210601104911

Metric Collector

0.1-20210219112938

Metricbeat

7.10.2

Patroni

13-2.0p6-20210525081943

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.14.0-20210812120726

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

Pushgateway

1.2.0

sf-notifier Updated

0.3-20210930112115

sf-reporter New

0.1-20210607111404

Telegraf

1.9.1-20210225142050

New 1.20.0-20210927090119

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 7.3.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-427.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20211013104642

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.3.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.6.8


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.11.0-6-gbfce76e/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-67-g25ab9f1a/lcm-agent

Helm charts

managed-lcm-api Updated

https://binary.mirantis.com/core/helm/managed-lcm-api-1.26.6.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.26.6.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.24.6.tgz

Docker images

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-67-g25ab9f1a

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-22.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.1.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-37.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-6.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-32.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-105.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-110.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.9.tgz

kibana Updated

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-30.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-12.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-12.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-34.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-202.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-7.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-16.tgz

sf-reporter New

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.1.0-mcp-13.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-807.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-19.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-19.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.4.1-20210707092546

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.22.2

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.10.2-20210601104922

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210915110132

grafana Updated

mirantis.azurecr.io/stacklight/grafana:8.1.2

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana

mirantis.azurecr.io/stacklight/kibana:7.10.2-20210601104911

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20210219112938

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20210812120726

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210930112115

sf-reporter New

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20210607111404

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.0p6-20210525081943

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

New mirantis.azurecr.io/stacklight/telegraf:1.20.0-20210927090119

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


7.2.0

This section outlines release notes for the Cluster release 7.2.0 that is introduced in the Mirantis Container Cloud release 2.12.0.

This Cluster release supports Mirantis Kubernetes Engine 3.4.5 with Kubernetes 1.20 and Mirantis Container Runtime 20.10.6.

For the list of known and resolved issues, refer to the Container Cloud release 2.12.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 7.2.0 that is introduced in the Container Cloud release 2.12.0.


MCR and MKE versions update

Updated the Mirantis Container Runtime (MCR) version from 20.10.5 to 20.10.6 and Mirantis Kubernetes Engine (MKE) version from 3.4.0 to 3.4.5 for the Container Cloud management, regional, and managed clusters. Also, added support for attachment of existing MKE clusters 3.3.7-3.3.12 and 3.4.1-3.4.5.

For the MCR release highlights and components versions, see MCR documentation: MCR release notes and MKE documentation: MKE release notes.

Ceph maintenance improvement

Integrated Ceph maintenance into the common upgrade procedure. The maintenance function is now applied programmatically, and the maintenance flag itself is deprecated.

Ceph RADOS Gateway tolerations

Technology Preview

Implemented the capability to specify RADOS Gateway tolerations through the KaaSCephCluster spec using the native Rook way for setting resource requirements for Ceph daemons.
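
The following sketch illustrates the idea with tolerations written in the standard Kubernetes format that Rook consumes; the hyperConverge parent key and the exact placement of the rgw section inside the KaaSCephCluster spec are assumptions:

    spec:
      cephClusterSpec:
        hyperConverge:                  # hypothetical parent key
          tolerations:
            rgw:                        # RADOS Gateway daemons
              - key: ceph-rgw-node      # example taint key
                operator: Exists
                effect: NoSchedule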

Short names for Kubernetes nodes in Grafana dashboards

Enhanced the Grafana dashboards to display user-friendly short names for Kubernetes nodes, for example, master-0, instead of long name labels such as kaas-node-f736fc1c-3baa-11eb-8262-0242ac110002. This feature ensures consistency with the Kubernetes node naming in the Container Cloud web UI.

All Grafana dashboards that present node data now have an additional Node identifier drop-down menu. By default, it is set to machine to display short names for Kubernetes nodes. To display Kubernetes node name labels as previously, change this option to node.

Improvements to StackLight alerting

Implemented the following improvements to StackLight alerting:

  • Enhanced the alert inhibition rules.

  • Reworked a number of alerts to improve alerting efficiency and reduce alert flooding.

  • Removed the inefficient DockerSwarmLeadElectionLoop and SystemDiskErrorsTooHigh alerts.

  • Added the matchers key to the routes configuration and deprecated the match and match_re keys, as shown in the sketch after this list.
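
A minimal sketch of the difference in a route definition, using the upstream Alertmanager syntax; how the route fragment is embedded into the StackLight Helm chart values is not shown here and may differ between releases:

    route:
      routes:
        - receiver: servicenow
          matchers:                     # new matchers syntax
            - 'severity = "critical"'
          # match:                      # deprecated equivalent
          #   severity: critical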

Logs-based metrics in StackLight

Implemented the capability to create custom logs-based metrics that you can use to configure StackLight notifications.

Components versions

The following table lists the components versions of the Cluster release 7.2.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration Updated

Mirantis Kubernetes Engine

3.4.5 0

Container runtime Updated

Mirantis Container Runtime

20.10.6 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook

1.6.8

LCM

descheduler Removed

n/a

Helm

2.16.11-40

helm-controller Updated

0.3.0-32-gee08c2b8

lcm-ansible Updated

0.10.0-12-g7cd13b6

lcm-agent Updated

0.3.0-32-gee08c2b8

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

8.4.1-20210707092546

Alertmanager

0.22.2

Alertmanager Webhook ServiceNow

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch

7.10.2-20210601104922

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20210602174807

Grafana

7.5.4

Grafana Image Renderer

2.0.1

IAM Proxy

6.0.1

Kibana

7.10.2-20210601104911

Metric Collector

0.1-20210219112938

Metricbeat

7.10.2

Patroni

13-2.0p6-20210525081943

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter Updated

0.14.0-20210812120726

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

Pushgateway

1.2.0

sf-notifier

0.3-20210702081359

Telegraf

1.9.1-20210225142050

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 7.2.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-409.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20210921155643

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.3.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner Updated

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.6.8


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.10.0-12-g7cd13b6/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-32-gee08c2b8/lcm-agent

Helm charts

descheduler Removed

n/a

managed-lcm-api Updated

https://binary.mirantis.com/core/helm/managed-lcm-api-1.25.6.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.25.6.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.24.6.tgz

Docker images

descheduler Removed

n/a

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-32-gee08c2b8

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-22.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.1.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-36.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-6.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-32.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-97.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-110.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.8.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-29.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-12.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-12.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-34.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-201.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-7.tgz

prometheus-es-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-16.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-595.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s Updated

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-19.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-19.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.4.1-20210707092546

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.22.2

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.10.2-20210601104922

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210602174807

grafana

mirantis.azurecr.io/stacklight/grafana:7.5.4

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana

mirantis.azurecr.io/stacklight/kibana:7.10.2-20210601104911

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20210219112938

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20210812120726

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210702081359

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.0p6-20210525081943

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


7.1.0

This section outlines release notes for the Cluster release 7.1.0 that is introduced in the Mirantis Container Cloud release 2.11.0.

This Cluster release supports Mirantis Kubernetes Engine 3.4.0 with Kubernetes 1.20 and Mirantis Container Runtime 20.10.5.

For the list of known and resolved issues, refer to the Container Cloud release 2.11.0 section.

Enhancements

This section outlines new features implemented in the Cluster release 7.1.0 that is introduced in the Container Cloud release 2.11.0.


Ceph Octopus

Upgraded Ceph from 14.2.19 (Nautilus) to 15.2.13 (Octopus) and Rook from 1.5.9 to 1.6.8.

Hyperconverged Ceph improvement

Technology Preview

Implemented the capability to define Ceph tolerations and resources management through the KaaSCephCluster spec using the native Rook way for setting resource requirements for Ceph daemons.
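
A sketch of what such a definition could look like, with resource requirements expressed in the native Rook format of per-daemon requests and limits; the hyperConverge parent key and the exact nesting inside the KaaSCephCluster spec are assumptions:

    spec:
      cephClusterSpec:
        hyperConverge:            # hypothetical parent key
          resources:
            osd:                  # per-daemon resource requirements, native Rook format
              requests:
                cpu: "2"
                memory: 4Gi
              limits:
                cpu: "4"
                memory: 8Gi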

Ceph cluster status

Improved the MiraCephLog custom resource by adding more information about all Ceph cluster entities and their statuses. The MiraCeph and MiraCephLog statuses and the MiraCephLog values are now integrated into KaaSCephCluster.status and can be viewed using the miraCephInfo, shortClusterInfo, and fullClusterInfo fields.

Ceph Manager modules

Implemented the capability to define a list of Ceph Manager modules to enable on the Ceph cluster using the mgr.modules parameter in KaaSCephCluster.
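
For illustration, a fragment that enables additional Ceph Manager modules; the mgr.modules parameter name comes from the description above, while the assumption is that it accepts a plain list of module names:

    spec:
      cephClusterSpec:
        mgr:
          modules:               # list of Ceph Manager modules to enable
            - pg_autoscaler      # example module names
            - balancer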

StackLight node labeling improvements

Implemented the following improvements for the StackLight node labeling during a cluster creation or post-deployment configuration:

  • Added a verification that a cluster contains a minimum of 3 worker nodes with the StackLight label for clusters with StackLight deployed in HA mode. This verification applies to the cluster deployment and update processes. For details on how to add the StackLight label before upgrading to the latest Cluster releases of Container Cloud 2.11.0, refer to Upgrade managed clusters with StackLight deployed in HA mode.

  • Added a notification about the minimum number of worker nodes with the StackLight label for HA StackLight deployments to the cluster live status description in the Container Cloud web UI.

Caution

Removing the StackLight label from worker nodes, as well as removing worker nodes that have the StackLight label, can cause the StackLight components to become inaccessible. It is important to keep the worker nodes where the StackLight local volumes were provisioned.

StackLight log level severity setting in web UI

Implemented the capability to set the default log level severity for all StackLight components as well as set a custom log level severity for specific StackLight components in the Container Cloud web UI. You can update this setting either during a managed cluster creation or during a post-deployment configuration.

Improvements to StackLight alerting

Implemented the following improvements to StackLight alerting:

  • Added the following alerts:

    • KubeContainersCPUThrottlingHigh that raises in case of container CPU throttling.

    • KubeletDown that raises if kubelet is down.

  • Reworked the alert inhibition rules.

  • Reworked a number of alerts to improve alerting efficiency and reduce alert flooding.

  • Removed the following inefficient alerts:

    • FileDescriptorUsageCritical

    • KubeCPUOvercommitNamespaces

    • KubeMemOvercommitNamespaces

    • KubeQuotaExceeded

    • ContainerScrapeError

Salesforce feed update

Implemented the capability to enable feed update in Salesforce using the feed_enabled parameter. By default, this parameter is set to false to save API calls.
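
A minimal sketch of enabling the feed update in the StackLight Helm chart values; the sfNotifier parent section is an assumption, and only the feed_enabled parameter itself is taken from the description above:

    sfNotifier:               # hypothetical parent section for the Salesforce notifier
      feed_enabled: true      # send feed updates to Salesforce (false by default)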

Documentation enhancements

On top of continuous improvements delivered to the existing Container Cloud guides, added a procedure on how to manually remove a Ceph OSD from a Ceph cluster.

Components versions

The following table lists the components versions of the Cluster release 7.1.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.4.0 0

Container runtime

Mirantis Container Runtime

20.10.5 1

Distributed storage Updated

Ceph

15.2.13 (Octopus)

Rook

1.6.8

LCM

descheduler

0.8.0

Helm

2.16.11-40

helm-controller Updated

0.2.0-399-g85be100f

lcm-ansible Updated

0.9.0-17-g28bc9ce

lcm-agent Updated

0.2.0-399-g85be100f

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta Updated

8.4.1-20210707092546

Alertmanager

0.22.2

Alertmanager Webhook ServiceNow

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch

7.10.2-20210601104922

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20210602174807

Grafana

7.5.4

Grafana Image Renderer

2.0.1

IAM Proxy

6.0.1

Kibana

7.10.2-20210601104911

Metric Collector

0.1-20210219112938

Metricbeat

7.10.2

Patroni

13-2.0p6-20210525081943

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.5.1-20210323132924

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter Updated

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

Pushgateway

1.2.0

sf-notifier Updated

0.3-20210702081359

Telegraf

1.9.1-20210225142050

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 7.1.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-368.tgz

Docker images

ceph Updated

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20210807103257

cephcsi Updated

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.3.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner Updated

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook Updated

mirantis.azurecr.io/ceph/rook/ceph:v1.6.8


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.9.0-17-g28bc9ce/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.2.0-399-g85be100f/lcm-agent

Helm charts Updated

descheduler

https://binary.mirantis.com/core/helm/descheduler-1.24.6.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.24.6.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.24.6.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.24.6.tgz

Docker images

descheduler

mirantis.azurecr.io/lcm/descheduler/v0.8.0

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.2.0-399-g85be100f

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-22.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.1.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-36.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-6.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-30.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-96.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-108.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.3.tgz

kibana Updated

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-29.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-12.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-12.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-33.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-188.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-7.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-10.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-16.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-574.tgz

telegraf-ds Updated

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s Updated

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-29.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-17.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-17.tgz

Docker images

alerta Updated

mirantis.azurecr.io/stacklight/alerta-web:8.4.1-20210707092546

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.22.2

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.10.2-20210601104922

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210602174807

grafana

mirantis.azurecr.io/stacklight/grafana:7.5.4

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana

mirantis.azurecr.io/stacklight/kibana:7.10.2-20210601104911

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20210219112938

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.5.1-20210323132924

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210702081359

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.0p6-20210525081943

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


7.0.0

This section outlines release notes for the Cluster release 7.0.0 that is introduced in the Mirantis Container Cloud release 2.10.0.

This Cluster release introduces support for the updated versions of Mirantis Kubernetes Engine 3.4.0 with Kubernetes 1.20 and Mirantis Container Runtime 20.10.5.

For the list of known and resolved issues, refer to the Container Cloud release 2.10.0 section.

Enhancements

This section outlines new features introduced in the Cluster release 7.0.0 that is the initial release of the 7.x Cluster release series.


Updated version of MCR, MKE, and Kubernetes

The 7.0.0 Cluster release introduces support for the updated versions of:

  • Mirantis Container Runtime (MCR) 20.10.5

  • Mirantis Kubernetes Engine (MKE) 3.4.0

  • Kubernetes 1.20.1

All existing management and regional clusters with the Cluster release 5.16.0 are automatically updated to the Cluster release 7.0.0 with the updated versions of MCR, MKE, and Kubernetes.

Once you update your existing managed clusters from the Cluster release 5.16.0 to 5.17.0, an update to the Cluster release 7.0.0 becomes available through the Container Cloud web UI menu.

Graceful MCR upgrade

Implemented a graceful Mirantis Container Runtime (MCR) upgrade from 19.03.14 to 20.10.5 on existing Container Cloud clusters.

MKE logs gathering enhancements

Improved MKE logs gathering by replacing the default DEBUG log level with INFO. This change reduces the unnecessary load on the MKE cluster caused by the excessive amount of logs generated with the DEBUG level enabled.

Log verbosity for StackLight components

Implemented the capability to configure the verbosity level of logs produced by all StackLight components or by each component separately.

Improvements to StackLight alerting

Implemented the following improvements to StackLight alerting:

  • Added the following alerts:

    • PrometheusMsTeamsDown that raises if prometheus-msteams is down.

    • ServiceNowWebhookReceiverDown that raises if alertmanager-webhook-servicenow is down.

    • SfNotifierDown that raises if the sf-notifier is down.

    • KubeAPICertExpirationMajor, KubeAPICertExpirationWarning, MKEAPICertExpirationMajor, MKEAPICertExpirationWarning that inform on SSL certificates expiration.

  • Removed the inefficient PostgresqlPrimaryDown alert.

  • Reworked a number of alerts to improve alerting efficiency and reduce alert flooding.

  • Reworked the alert inhibition rules to match the receivers.

  • Updated Alertmanager to v0.22.2.

  • Changed the default behavior of the Salesforce alerts integration. Now, by default, only Critical alerts are sent to Salesforce.

Proxy configuration on existing clusters

Implemented the capability to add or configure proxy on existing Container Cloud managed clusters using the Container Cloud web UI.

Documentation enhancements

On top of continuous improvements delivered to the existing Container Cloud guides, added a procedure on how to move a Ceph Monitor daemon to another node.

Components versions

The following table lists the components versions of the Cluster release 7.0.0.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.4.0 0

Container runtime

Mirantis Container Runtime

20.10.5 1

Distributed storage

Ceph

14.2.19 (Nautilus)

Rook

1.5.9

LCM

descheduler

0.8.0

Helm

2.16.11-40

helm-controller

0.2.0-372-g7e042f4d

lcm-ansible

0.8.0-17-g63ec424

lcm-agent

0.2.0-373-gae771bb4

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

8.4.1-20210312131419

Alertmanager

0.22.2

Alertmanager Webhook ServiceNow

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch

7.10.2-20210601104922

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20210602174807

Grafana

7.5.4

Grafana Image Renderer

2.0.1

IAM Proxy

6.0.1

Kibana

7.10.2-20210601104911

Metric Collector

0.1-20210219112938

Metricbeat

7.10.2

Patroni

13-2.0p6-20210525081943

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.5.1-20210323132924

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20200428121305

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

Pushgateway

1.2.0

sf-notifier

0.3-20210617140951

sf-reporter

0.1-20210607111404

Telegraf

1.9.1-20210225142050

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 7.0.0.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-305.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v14.2.19

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20210716222903

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.2.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.1

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.5.9


LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.8.0-17-g63ec424/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.2.0-373-gae771bb4/lcm-agent

Helm charts

descheduler

https://binary.mirantis.com/core/helm/descheduler-1.23.2.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.23.2.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.23.2.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.23.2.tgz

Docker images

descheduler

mirantis.azurecr.io/lcm/descheduler/v0.8.0

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller

mirantis.azurecr.io/lcm/lcm-controller:v0.2.0-372-g7e042f4d

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-22.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.1.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-33.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-6.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-25.tgz

fluentd-elasticsearch

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-93.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-105.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-27.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-12.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-12.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-30.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-158.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-7.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-10.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-16.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.1.0-mcp-13.tgz

stacklight

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-538.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-20.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-20.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-16.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-16.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.4.1-20210312131419

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.22.2

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.10.2-20210601104922

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210602174807

grafana

mirantis.azurecr.io/stacklight/grafana:7.5.4

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.2

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana

mirantis.azurecr.io/stacklight/kibana:7.10.2-20210601104911

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20210219112938

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.5.1-20210323132924

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20200428121305

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210617140951

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20210607111404

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.0p6-20210525081943

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


6.x series

This section outlines release notes for the unsupported Cluster releases of the 6.x series.

6.20.0

The Cluster release 6.20.0 is introduced in the Mirantis Container Cloud release 2.13.1. This Cluster release is based on the Cluster release 5.20.0.

The Cluster release 6.20.0 supports:

  • Mirantis OpenStack for Kubernetes (MOS) 21.6. For details, see MOS Release Notes.

  • Mirantis Kubernetes Engine (MKE) 3.3.12. For details, see MKE Release Notes.

  • Mirantis Container Runtime (MCR) 20.10.6. For details, see MCR Release Notes.

  • Kubernetes 1.18.

For the list of addressed and known issues, refer to the Container Cloud release 2.13.0 section.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 6.20.0.


Improvements to StackLight alerting

Implemented the following improvements to StackLight alerting:

  • Implemented per-service *TargetDown and *TargetsOutage alerts that raise if one or all Prometheus targets of a service are down, respectively.

  • Enhanced the alert inhibition rules to reduce alert flooding.

  • Removed the following inefficient alerts:

    • TargetDown

    • TargetFlapping

    • KubeletDown

    • ServiceNowWebhookReceiverDown

    • SfNotifierDown

    • PrometheusMsTeamsDown

Components versions

The following table lists the components versions of the Cluster release 6.20.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Components versions of the Cluster release 6.20.0

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.3.12 0

Container runtime

Mirantis Container Runtime

20.10.6 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook

1.6.8

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-67-g25ab9f1a

lcm-ansible Updated

0.11.0-6-gbfce76e

lcm-agent Updated

0.3.0-67-g25ab9f1a

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

8.4.1-20210707092546

Alertmanager

0.22.2

Alertmanager Webhook ServiceNow

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch

7.10.2-20210601104922

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd Updated

1.10.2-20210915110132

Grafana Updated

8.1.2

Grafana Image Renderer

2.0.1

IAM Proxy

6.0.1

Kibana

7.10.2-20210601104911

Metric Collector

0.1-20210219112938

Metricbeat

7.10.2

Patroni

13-2.0p6-20210525081943

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.14.0-20210812120726

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

Pushgateway

1.2.0

sf-notifier Updated

0.3-20210930112115

sf-reporter New

0.1-20210607111404

Telegraf

1.9.1-20210225142050

New 1.20.0-20210927090119

Telemeter

4.4.0-20200424

0

For the MKE release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 6.20.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-427.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20211013104642

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.3.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.6.8


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.11.0-6-gbfce76e/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-67-g25ab9f1a/lcm-agent

Helm charts

managed-lcm-api Updated

https://binary.mirantis.com/core/helm/managed-lcm-api-1.26.6.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.26.6.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.24.6.tgz

Docker images

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-67-g25ab9f1a

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-22.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.1.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-37.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-6.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-32.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-105.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-110.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.9.tgz

kibana Updated

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-30.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-12.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-12.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-34.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-202.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-7.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-16.tgz

sf-reporter New

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.1.0-mcp-13.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-807.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-19.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-19.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.4.1-20210707092546

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.22.2

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.10.2-20210601104922

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210915110132

grafana Updated

mirantis.azurecr.io/stacklight/grafana:8.1.2

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana

mirantis.azurecr.io/stacklight/kibana:7.10.2-20210601104911

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20210219112938

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20210812120726

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210930112115

sf-reporter New

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20210607111404

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.0p6-20210525081943

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

New mirantis.azurecr.io/stacklight/telegraf:1.20.0-20210927090119

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


6.19.0

The Cluster release 6.19.0 is introduced in the Mirantis Container Cloud release 2.12.0. This Cluster release is based on the Cluster release 5.19.0.

The Cluster release 6.19.0 supports:

  • Mirantis OpenStack for Kubernetes (MOS) 21.5. For details, see MOS Release Notes.

  • Mirantis Kubernetes Engine (MKE) 3.3.12. For details, see MKE Release Notes.

  • Mirantis Container Runtime (MCR) 20.10.6. For details, see MCR Release Notes.

  • Kubernetes 1.18.

For the list of addressed and known issues, refer to the Container Cloud release 2.12.0 section.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 6.19.0.


MCR and MKE versions update

Updated the Mirantis Container Runtime (MCR) version from 20.10.5 to 20.10.6 and the Mirantis Kubernetes Engine (MKE) version from 3.3.6 to 3.3.12 for the Container Cloud management, regional, and managed clusters. Also added support for attachment of existing MKE clusters 3.3.7-3.3.12 and 3.4.1-3.4.5.

For the MCR and MKE release highlights and components versions, see MCR documentation: MCR release notes and MKE documentation: MKE release notes.

Ceph maintenance improvement

Integrated Ceph maintenance into the common upgrade procedure. Now, the maintenance flag functionality is set programmatically, and the flag itself is deprecated.

Ceph RADOS Gateway tolerations

Technology Preview

Implemented the capability to specify RADOS Gateway tolerations through the KaaSCephCluster spec using the native Rook way for setting resource requirements for Ceph daemons.
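
A minimal sketch of such a configuration is shown below. The exact nesting of the RADOS Gateway section inside the KaaSCephCluster spec is an assumption; the toleration fields follow the standard Kubernetes format that Rook consumes:

spec:
  cephClusterSpec:
    # Hypothetical placement of the RADOS Gateway tolerations
    objectStorage:
      rgw:
        tolerations:
        - key: "dedicated"
          operator: "Equal"
          value: "rgw"
          effect: "NoSchedule"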

Short names for Kubernetes nodes in Grafana dashboards

Enhanced the Grafana dashboards to display user-friendly short names for Kubernetes nodes, for example, master-0, instead of long name labels such as kaas-node-f736fc1c-3baa-11eb-8262-0242ac110002. This feature provides consistency with the Kubernetes node naming in the Container Cloud web UI.

All Grafana dashboards that present node data now have an additional Node identifier drop-down menu. By default, it is set to machine to display short names for Kubernetes nodes. To display Kubernetes node name labels as previously, change this option to node.

Improvements to StackLight alerting

Implemented the following improvements to StackLight alerting:

  • Enhanced the alert inhibition rules.

  • Reworked a number of alerts to improve alerting efficiency and reduce alert flooding.

  • Removed the inefficient DockerSwarmLeadElectionLoop and SystemDiskErrorsTooHigh alerts.

  • Added the matchers key to the routes configuration and deprecated the match and match_re keys, as shown in the sketch below.
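
The matchers syntax is part of the upstream Alertmanager configuration. A minimal sketch of a route that uses it, with the deprecated form shown for comparison, follows; the receiver name is illustrative only:

route:
  routes:
  # Route Critical alerts using the newer matchers key
  - matchers:
    - severity = "critical"
    receiver: sf-notifier
  # Previously expressed with the deprecated match key:
  # - match:
  #     severity: critical
  #   receiver: sf-notifier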

Logs-based metrics in StackLight

Implemented the capability to create custom logs-based metrics that you can use to configure StackLight notifications.

Components versions

The following table lists the components versions of the Cluster release 6.19.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Components versions of the Cluster release 6.19.0

Component

Application/Service

Version

Cluster orchestration Updated

Mirantis Kubernetes Engine

3.3.12 0

Container runtime Updated

Mirantis Container Runtime

20.10.6 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook

1.6.8

LCM

descheduler Removed

n/a

Helm

2.16.11-40

helm-controller Updated

0.3.0-32-gee08c2b8

lcm-ansible Updated

0.10.0-12-g7cd13b6

lcm-agent Updated

0.3.0-32-gee08c2b8

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

8.4.1-20210707092546

Alertmanager

0.22.2

Alertmanager Webhook ServiceNow

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch

7.10.2-20210601104922

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20210602174807

Grafana

7.5.4

Grafana Image Renderer

2.0.1

IAM Proxy

6.0.1

Kibana

7.10.2-20210601104911

Metric Collector

0.1-20210219112938

Metricbeat

7.10.2

Patroni

13-2.0p6-20210525081943

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter Updated

0.14.0-20210812120726

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

Pushgateway

1.2.0

sf-notifier

0.3-20210702081359

Telegraf

1.9.1-20210225142050

Telemeter

4.4.0-20200424

0

For the MKE release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 6.19.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-409.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20210921155643

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.3.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner Updated

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.6.8


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.10.0-12-g7cd13b6/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-32-gee08c2b8/lcm-agent

Helm charts

descheduler Removed

n/a

managed-lcm-api Updated

https://binary.mirantis.com/core/helm/managed-lcm-api-1.25.6.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.25.6.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.24.6.tgz

Docker images

descheduler Removed

n/a

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-32-gee08c2b8

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-22.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.1.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-36.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-6.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-32.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-97.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-110.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.8.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-29.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-12.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-12.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-34.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-201.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-7.tgz

prometheus-es-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-16.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-595.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s Updated

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-19.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-19.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.4.1-20210707092546

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.22.2

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.10.2-20210601104922

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210602174807

grafana

mirantis.azurecr.io/stacklight/grafana:7.5.4

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana

mirantis.azurecr.io/stacklight/kibana:7.10.2-20210601104911

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20210219112938

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20210812120726

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210702081359

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.0p6-20210525081943

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


6.18.0

The Cluster release 6.18.0 is introduced in the Mirantis Container Cloud release 2.11.0. This Cluster release is based on the Cluster release 5.18.0.

The Cluster release 6.18.0 supports:

  • Mirantis OpenStack for Kubernetes (MOS) 21.4. For details, see MOS Release Notes.

  • Mirantis Kubernetes Engine (MKE) 3.3.6 and the updated version of Mirantis Container Runtime (MCR) 20.10.5. For details, see MKE Release Notes and MCR Release Notes.

  • Kubernetes 1.18.

For the list of addressed issues, refer to the Container Cloud releases 2.10.0 and 2.11.0 sections. For the list of known issues, refer to the Container Cloud release 2.11.0.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 6.18.0.


Graceful MCR upgrade

Implemented a graceful Mirantis Container Runtime (MCR) upgrade from 19.03.14 to 20.10.5 on existing Container Cloud clusters.

MKE logs gathering enhancements

Improved MKE logs gathering by replacing the default DEBUG log level with INFO. This change reduces the unnecessary load on the MKE cluster caused by an excessive amount of logs generated with the DEBUG level enabled.

Log verbosity for StackLight components

Implemented the capability to configure the verbosity level of logs produced by all StackLight components or by each component separately.

StackLight log level severity setting in web UI

Implemented the capability to set the default log level severity for all StackLight components as well as set a custom log level severity for specific StackLight components in the Container Cloud web UI. You can update this setting either during a managed cluster creation or during a post-deployment configuration.

Improvements to StackLight alerting

Implemented the following improvements to StackLight alerting:

  • Added the following alerts:

    • PrometheusMsTeamsDown that raises if prometheus-msteams is down.

    • ServiceNowWebhookReceiverDown that raises if alertmanager-webhook-servicenow is down.

    • SfNotifierDown that raises if the sf-notifier is down.

    • KubeAPICertExpirationMajor, KubeAPICertExpirationWarning, MKEAPICertExpirationMajor, and MKEAPICertExpirationWarning that inform about SSL certificate expiration.

    • KubeContainersCPUThrottlingHigh that raises in case of containers CPU throttling.

    • KubeletDown that raises if kubelet is down.

  • Removed the following inefficient alerts:

    • PostgresqlPrimaryDown

    • FileDescriptorUsageCritical

    • KubeCPUOvercommitNamespaces

    • KubeMemOvercommitNamespaces

    • KubeQuotaExceeded

    • ContainerScrapeError

  • Reworked a number of alerts to improve alerting efficiency and reduce alert flooding.

  • Reworked the alert inhibition rules to match the receivers.

  • Updated Alertmanager to v0.22.2.

  • Changed the default behavior of the Salesforce alerts integration. Now, by default, only Critical alerts are sent to Salesforce.

StackLight node labeling improvements

Implemented the following improvements for the StackLight node labeling during a cluster creation or post-deployment configuration:

  • Added a verification that a cluster contains a minimum of 3 worker nodes with the StackLight label for clusters with StackLight deployed in HA mode. This verification applies to cluster deployment and update processes. For details on how to add the StackLight label before upgrade to the latest Cluster releases of Container Cloud 2.11.0, refer to Upgrade managed clusters with StackLight deployed in HA mode.

  • Added a notification about the minimum number of worker nodes with the StackLight label for HA StackLight deployments to the cluster live status description in the Container Cloud web UI.

Caution

Removing the StackLight label from worker nodes, as well as removing worker nodes that have the StackLight label, can cause the StackLight components to become inaccessible. It is important to keep the worker nodes where the StackLight local volumes were provisioned.

Salesforce feed update

Implemented the capability to enable feed update in Salesforce using the feed_enabled parameter. By default, this parameter is set to false to save API calls.
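
A minimal sketch of enabling this option is shown below. Only the feed_enabled parameter name comes from this release note; the surrounding structure is an assumption:

sfNotifier:
  # Enable Salesforce feed updates (disabled by default to save API calls)
  feed_enabled: true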

Proxy configuration on existing clusters

Implemented the capability to add or configure proxy on existing Container Cloud managed clusters using the Container Cloud web UI.

Ceph Octopus

Upgraded Ceph from 14.2.19 (Nautilus) to 15.2.13 (Octopus) and Rook from 1.5.9 to 1.6.8.

Documentation enhancements

On top of continuous improvements delivered to the existing Container Cloud guides, added the following procedures:

Hyperconverged Ceph improvement

Technology Preview

Implemented the capability to define Ceph tolerations and resources management through the KaaSCephCluster spec using the native Rook way for setting resource requirements for Ceph daemons.

Ceph cluster status

Improved the MiraCephLog custom resource by adding more information about all Ceph cluster entities and their statuses. The MiraCeph and MiraCephLog statuses and the MiraCephLog values are now integrated into KaaSCephCluster.status and can be viewed using the miraCephInfo, shortClusterInfo, and fullClusterInfo fields.

Ceph Manager modules

Implemented the capability to define a list of Ceph Manager modules to enable on the Ceph cluster using the mgr.modules parameter in KaaSCephCluster.
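
A minimal sketch is shown below. The mgr.modules parameter name comes from this release note; its placement inside the spec and the exact value format are assumptions, and the module names are standard Ceph Manager modules:

spec:
  cephClusterSpec:
    mgr:
      # Enable selected Ceph Manager modules on the Ceph cluster
      modules:
      - name: pg_autoscaler
        enabled: true
      - name: balancer
        enabled: true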

Components versions

The following table lists the components versions of the Cluster release 6.18.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.3.6 0

Container runtime

Mirantis Container Runtime Updated

20.10.5 1

Distributed storage Updated

Ceph

15.2.13 (Octopus)

Rook

1.6.8

LCM

descheduler

0.8.0

Helm

2.16.11-40

helm-controller Updated

0.2.0-399-g85be100f

lcm-ansible Updated

0.9.0-17-g28bc9ce

lcm-agent Updated

0.2.0-399-g85be100f

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta Updated

8.4.1-20210707092546

Alertmanager Updated

0.22.2

Alertmanager Webhook ServiceNow Updated

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch Updated

7.10.2-20210601104922

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd Updated

1.10.2-20210602174807

Grafana

7.5.4

Grafana Image Renderer

2.0.1

IAM Proxy

6.0.1

Kibana Updated

7.10.2-20210601104911

Metric Collector

0.1-20210219112938

Metricbeat

7.10.2

Patroni

13-2.0p6-20210525081943

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.5.1-20210323132924

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter Updated

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

Pushgateway

1.2.0

sf-notifier Updated

0.3-20210702081359

Telegraf

1.9.1-20210225142050

Telemeter

4.4.0-20200424

0

For the MKE release highlights and components versions, see MKE documentation: MKE release notes.

1
  • For the MCR release highlights, see MCR documentation: MCR release notes.

  • Due to the development limitations, the MCR upgrade to version 19.03.14 on existing Container Cloud clusters is not supported.

Artifacts

This section lists the components artifacts of the Cluster release 6.18.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-368.tgz

Docker images

ceph Updated

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20210807103257

cephcsi Updated

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.3.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner Updated

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook Updated

mirantis.azurecr.io/ceph/rook/ceph:v1.6.8


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.9.0-17-g28bc9ce/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.2.0-399-g85be100f/lcm-agent

Helm charts Updated

descheduler

https://binary.mirantis.com/core/helm/descheduler-1.24.6.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.24.6.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.24.6.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.24.6.tgz

Docker images

descheduler

mirantis.azurecr.io/lcm/descheduler/v0.8.0

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.2.0-399-g85be100f

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta Updated

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-22.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.1.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-36.tgz

elasticsearch-curator Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-6.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-30.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-96.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-108.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.3.tgz

kibana Updated

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-29.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-12.tgz

metricbeat Updated

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-12.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-33.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-188.tgz

prometheus-blackbox-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-7.tgz

prometheus-es-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-10.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

sf-notifier Updated

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-16.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-574.tgz

telegraf-ds Updated

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s Updated

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-29.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-17.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-17.tgz

Docker images

alerta Updated

mirantis.azurecr.io/stacklight/alerta-web:8.4.1-20210707092546

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0.22.2

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch Updated

mirantis.azurecr.io/stacklight/elasticsearch:7.10.2-20210601104922

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210602174807

grafana

mirantis.azurecr.io/stacklight/grafana:7.5.4

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana Updated

mirantis.azurecr.io/stacklight/kibana:7.10.2-20210601104911

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20210219112938

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.5.1-20210323132924

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210702081359

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.0p6-20210525081943

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


6.16.0

The Cluster release 6.16.0 is introduced in the Mirantis Container Cloud release 2.9.0. This Cluster release is based on the Cluster release 5.16.0.

The Cluster release 6.16.0 supports:

For the list of addressed issues, refer to the Container Cloud releases 2.8.0 and 2.9.0 sections. For the list of known issues, refer to the Container Cloud release 2.9.0.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 6.16.0.


StackLight components upgrade

  • Upgraded PostgreSQL from version 12 to 13

  • Updated Elasticsearch, Kibana, and Metricbeat from version 7.6.1 to 7.10.2

StackLight notifications to Microsoft Teams

Implemented the capability to enable Alertmanager to send notifications to a Microsoft Teams channel.
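
The following sketch illustrates what enabling this integration might look like in the StackLight configuration; the key names are illustrative assumptions, and only the Microsoft Teams webhook concept comes from this release note:

alertmanagerSimpleConfig:
  msteams:
    enabled: true
    # Incoming webhook URL of the target Microsoft Teams channel
    url: <teams-incoming-webhook-url>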

StackLight notifications to ServiceNow

Implemented the capability to enable Alertmanager to send notifications to ServiceNow. Also added the ServiceNowAuthFailure alert that raises if authentication to ServiceNow fails.

StackLight log collection optimization

Improved the log collection mechanism by optimizing the existing log parsers and adding new ones for multiple Container Cloud components.

Ceph default configuration options

Enhanced Ceph Controller to automatically specify default configuration options for each Ceph cluster during the Ceph deployment.

Ceph KaaSCephCluster enhancements

Implemented the following Ceph enhancements in the KaaSCephCluster CR (a combined sketch follows this list):

  • Added the capability to specify the rgw role using the roles parameter

  • Added the following parameters:

    • rookConfig to override the Ceph configuration options

    • useAsFullName to enable the Ceph block pool to use only the name value as a name

    • targetSizeRatio to specify the expected consumption of the Ceph cluster total capacity

    • SSLCert to use a custom TLS certificate to access the Ceph RGW endpoint

    • nodeGroups to easily define specifications for multiple Ceph nodes using lists, grouped by node lists or node labels

    • clients to specify the Ceph clients and their capabilities
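
A combined sketch of several of these parameters is shown below. Only the parameter names come from this list; their exact placement in the CR and the example values are assumptions:

spec:
  cephClusterSpec:
    # Override arbitrary Ceph configuration options
    rookConfig:
      osd_pool_default_size: "3"
    pools:
    - name: kubernetes
      # Use only the name value as the pool name
      useAsFullName: true
      # Expected share of the total cluster capacity (example value)
      targetSizeRatio: 10
    # Define Ceph clients and their capabilities
    clients:
    - name: glance
      caps:
        mon: allow r
        osd: allow rwx pool=images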

Multinetwork configuration for Ceph

Implemented the capability to configure multiple networks for a Ceph cluster.
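
A minimal sketch, assuming the public and cluster (replication) networks are defined as CIDRs in the KaaSCephCluster spec, is shown below; the field names and the example subnets are assumptions:

spec:
  cephClusterSpec:
    network:
      # Client/access traffic
      publicNet: 10.0.10.0/24
      # OSD replication traffic
      clusterNet: 10.0.20.0/24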

TLS for Ceph public endpoints

Implemented the capability to configure TLS for a Ceph cluster using a custom ingress rule for Ceph public endpoints.

Ceph RBD mirroring

Implemented the capability to enable RADOS Block Device (RBD) mirroring for Ceph pools.

Components versions

The following table lists the components versions of the Cluster release 6.16.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Distributed storage Updated

Ceph

14.2.19 (Nautilus)

Rook

1.5.9

Container runtime

Mirantis Container Runtime

19.03.14 1

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.3.6 0

LCM

descheduler

0.8.0

Helm

2.16.11-40

helm-controller Updated

0.2.0-349-g4870b7f5

lcm-ansible Updated

0.7.0-9-g30acaae

lcm-agent Updated

0.2.0-349-g4870b7f5

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

8.4.1-20210312131419

Alertmanager

0.21.0

Alertmanager Webhook ServiceNow New

0.1-20210426114325

Cerebro

0.9.3

Elasticsearch Updated

7.10.2-20210513065347

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd Updated

1.10.2-20210518100631

Grafana Updated

7.5.4

Grafana Image Renderer

2.0.1

IAM Proxy

6.0.1

Kibana Updated

7.10.2-20210513065546

Metric Collector

0.1-20210219112938

Metricbeat Updated

7.10.2

Netchecker Deprecated

1.4.1

Patroni Updated

13-2.0p6-20210525081943

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.5.1-20210323132924

Prometheus MS Teams New

1.4.2

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20200428121305

Prometheus Postgres Exporter Updated

0.9.0

Prometheus Relay

0.3-20210317133316

Pushgateway

1.2.0

sf-notifier

0.3-20210323132354

sf-reporter

0.1-20201216142628

Telegraf

1.9.1-20210225142050

Telemeter

4.4.0-20200424

0

For the MKE release highlights and components versions, see MKE documentation: MKE release notes.

1
  • For the MCR release highlights, see MCR documentation: MCR release notes.

  • Due to the development limitations, the MCR upgrade to version 19.03.14 on existing Container Cloud clusters is not supported.

Artifacts

This section lists the components artifacts of the Cluster release 6.16.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-271.tgz

Docker images

ceph Updated

mirantis.azurecr.io/ceph/ceph:v14.2.19

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20210521190241

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.2.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner Updated

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.1

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook Updated

mirantis.azurecr.io/ceph/rook/ceph:v1.5.9


LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible Updated

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.7.0-9-g30acaae/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.2.0-349-g4870b7f5/lcm-agent

Helm charts Updated

descheduler

https://binary.mirantis.com/core/helm/descheduler-1.22.4.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.22.4.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.22.4.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.22.4.tgz

Docker images

descheduler

mirantis.azurecr.io/lcm/descheduler/v0.8.0

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.2.0-349-g4870b7f5

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta Updated

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-20.tgz

alertmanager-webhook-servicenow New

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.1.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-31.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-2.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-20.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-83.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-102.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

kibana Updated

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-25.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-8.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-8.tgz

netchecker Deprecated

https://binary.mirantis.com/core/helm/netchecker-1.4.1.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-24.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-139.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-4.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-3.tgz

prometheus-msteams New

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-11.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.1.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-492.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-20.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-20.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-12.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-12.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.4.1-20210312131419

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.21.0

alertmanager-webhook-servicenow New

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210426114325

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch Updated

mirantis.azurecr.io/stacklight/elasticsearch:7.10.2-20210513065347

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210518100631

gce-proxy

mirantis.azurecr.io/stacklight/gce-proxy:1.11

grafana Updated

mirantis.azurecr.io/stacklight/grafana:7.5.4

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.2

k8s-netchecker-agent Deprecated

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-agent:2019.1

k8s-netchecker-server Deprecated

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-server:2019.1

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana Updated

mirantis.azurecr.io/stacklight/kibana:7.10.2-20210513065546

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20210219112938

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.10.2

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.5.1-20210323132924

prometheus-msteams New

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20200428121305

prometheus-postgres-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210323132354

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20201216152628

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.0p6-20210525081943

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


6.14.0

The Cluster release 6.14.0 is introduced in the Mirantis Container Cloud release 2.7.0. This Cluster release is based on the Cluster release 5.14.0.

The Cluster release 6.14.0 supports:

For the list of resolved issues, refer to the Container Cloud releases 2.6.0 and 2.7.0 sections. For the list of known issues, refer to the Container Cloud release 2.7.0 section.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 6.14.0.


StackLight logging levels

Significantly enhanced the StackLight log collection mechanism to avoid collecting and keeping an excessive amount of log messages when it is not essential. Now, during or after deployment of StackLight, you can select one of the 9 available logging levels depending on the required severity. The default logging level is INFO.
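The level is selected through the StackLight configuration. As a minimal sketch only, assuming a hypothetical logging.level key (this note names the default level but not the exact parameter), the override could look as follows:

    logging:
      # Assumption: hypothetical key name and placement, shown for illustration
      # only; INFO is the documented default, DEBUG is just an example value.
      level: DEBUG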

Remote logging to syslog

Implemented the capability to configure StackLight to forward all logs to an external syslog server. In this case, StackLight will send logs both to the syslog server and to Elasticsearch, which is the default target.

Log collection optimization

Improved the log collection mechanism by optimizing the existing and adding new log parsers for multiple Container Cloud components.

Hyperconverged Ceph

Technology Preview

Implemented the capability to configure Ceph Controller to start pods on tainted nodes and manage the resources of Ceph nodes. Now, when bootstrapping a new management or managed cluster, you can specify requests, limits, or tolerations for Ceph resources. You can also configure resource management for an existing Ceph cluster. However, such an approach may cause downtime.

Ceph objectStorage section in KaaSCephCluster

Improved user experience by moving the rgw section of the KaaSCephCluster CR to a common objectStorage section that now includes all RADOS Gateway configurations of a Ceph cluster. The spec.rgw section is deprecated. However, if you continue using spec.rgw, it is automatically translated into the new objectStorage.rgw section during the Container Cloud update to 2.6.0.
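A minimal sketch of the relocated section follows; its exact placement in the KaaSCephCluster specification and the name field are assumptions used for illustration only:

    objectStorage:
      # Assumption: illustrative fields only; refer to the Operations Guide
      # for the authoritative RADOS Gateway schema. The deprecated spec.rgw
      # section is translated into this section automatically during update.
      rgw:
        name: openstack-store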

Ceph maintenance orchestration

Implemented the capability to enable Ceph maintenance mode using the maintenance flag not only during a managed cluster update but also when required. However, Mirantis does not recommend enabling maintenance on production deployments other than during update.

Dedicated network for the Ceph distributed storage traffic

Technology Preview

Added the possibility to configure dedicated networks for the Ceph cluster access and replication traffic using dedicated subnets. Container Cloud automatically configures Ceph to use the addresses from the dedicated subnets after you assign the corresponding addresses to the storage nodes.

Ceph Multisite configuration

Technology Preview

Implemented the capability to enable the Ceph Multisite configuration that allows object storage to replicate its data over multiple Ceph clusters. With Multisite, each such object storage is independent and isolated from the other object storages in the cluster.

Ceph troubleshooting documentation

On top of continuous improvements delivered to the existing Container Cloud guides, added the Troubleshoot Ceph section to the Operations Guide. This section now contains a detailed procedure for recovering a failed or accidentally removed Ceph cluster.

Components versions

The following table lists the components versions of the Cluster release 6.14.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Distributed storage

Ceph

14.2.12 (Nautilus)

Rook

1.5.5

Container runtime

Mirantis Container Runtime

19.03.14 1

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.3.6 0

LCM

descheduler

0.8.0

Helm

2.16.11-40

helm-controller Updated

0.2.0-297-g8c87ad67

lcm-ansible Updated

0.5.0-10-gdd307e6

lcm-agent Updated

0.2.0-300-ga874e0df

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta Updated

8.4.1-20210312131419

Alertmanager

0.21.0

Cerebro

0.9.3

Elasticsearch

7.6.1

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd Updated

1.10.2-20210301155825

Grafana Updated

7.3.7

Grafana Image Renderer Updated

2.0.1

IAM Proxy

6.0.1

Kibana

7.6.1

Metric Collector Updated

0.1-20210219112938

Metricbeat

7.6.1

Netchecker

1.4.1

Patroni

12-1.6p3

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter Updated

0.5.1-20210323132924

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20200428121305

Prometheus Postgres Exporter

0.8.0-20201006113956

Prometheus Relay Updated

0.3-20210317133316

Pushgateway

1.2.0

sf-notifier Updated

0.3-20210323132354

sf-reporter

0.1-20201216142628

Telegraf Updated

1.9.1-20210225142050

Telemeter

4.4.0-20200424

0

For the MKE release highlights and components versions, see MKE documentation: MKE release notes.

1
  • For the MCR release highlights, see MCR documentation: MCR release notes.

  • Due to the development limitations, the MCR upgrade to version 19.03.14 on existing Container Cloud clusters is not supported.

Artifacts

This section lists the components artifacts of the Cluster release 6.14.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-177.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v14.2.12

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20210322210534

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.2.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.0

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.5.5


LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible Updated

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.5.0-10-gdd307e6/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.2.0-300-ga874e0df/lcm-agent

Helm charts

descheduler Updated

https://binary.mirantis.com/core/helm/descheduler-1.19.1.tgz

managed-lcm-api Updated

https://binary.mirantis.com/core/helm/managed-lcm-api-1.19.1.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.19.1.tgz

metrics-server Updated

https://binary.mirantis.com/core/helm/metrics-server-1.19.1.tgz

Docker images

descheduler

mirantis.azurecr.io/lcm/descheduler/v0.8.0

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.2.0-297-g8c87ad67

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta Updated

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-15.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-22.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-2.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-17.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-61.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-93.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-20.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-8.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-8.tgz

netchecker

https://binary.mirantis.com/core/helm/netchecker-1.4.1.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-20.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-124.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-4.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-3.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-11.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.1.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-438.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-20.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-20.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-12.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-12.tgz

Docker images

alerta Updated

mirantis.azurecr.io/stacklight/alerta-web:8.4.1-20210312131419

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.21.0

alpine-python3-requests

mirantis.azurecr.io/stacklight/alpine-python3-requests:latest-20200618

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.6.1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210301155825

gce-proxy

mirantis.azurecr.io/stacklight/gce-proxy:1.11

grafana Updated

mirantis.azurecr.io/stacklight/grafana:7.3.7

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.2

k8s-netchecker-agent

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-agent:2019.1

k8s-netchecker-server

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-server:2019.1

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana

mirantis.azurecr.io/stacklight/kibana:7.6.1

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20210219112938

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.6.1

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.5.1-20210323132924

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20200428121305

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.8.0-20201006113956

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210323132354

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20201216142628

spilo

mirantis.azurecr.io/stacklight/spilo:12-1.6p3

telegraf Updated

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225142050

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


6.12.0

The Cluster release 6.12.0 is introduced in the Mirantis Container Cloud release 2.5.0 and is also supported by the Container Cloud release 2.6.0. This Cluster release is based on the Cluster release 5.12.0.

The Cluster release 6.12.0 supports:

  • Mirantis OpenStack for Kubernetes (MOS) 21.1. For details, see MOS Release Notes.

  • Updated versions of Mirantis Kubernetes Engine (MKE) 3.3.6 and Mirantis Container Runtime (MCR) 19.03.14. For details, see MKE Release Notes and MCR Release Notes.

  • Kubernetes 1.18.

For the list of resolved issues, refer to the Container Cloud releases 2.4.0 and 2.5.0 sections. For the list of known issues, refer to the Container Cloud release 2.5.0 section.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 6.12.0.


Alert inhibition rules

Implemented alert inhibition rules to provide a clearer view of the cloud status and simplify troubleshooting. Using alert inhibition rules, Alertmanager decreases alert noise by suppressing notifications for dependent alerts. The feature is enabled by default. For details, see Alert dependencies.

Integration between Grafana and Kibana

Implemented integration between Grafana and Kibana by adding a View logs in Kibana link to the majority of Grafana dashboards, which allows you to immediately view contextually relevant logs through the Kibana web UI.

Telegraf alert

Implemented the TelegrafGatherErrors alert that is raised if Telegraf fails to gather metrics.

Learn more

Telegraf

Configuration of Ironic Telegraf input plugin

Added the ironic.insecure parameter for enabling or disabling the host and chain verification for bare metal Ironic monitoring.
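For illustration only, assuming the parameter is set through the StackLight configuration values, it could look similar to the following; the placement of the key is an assumption, and only the ironic.insecure name comes from this note:

    ironic:
      # Assumption: illustrative placement; setting the flag to true disables
      # the host and chain verification for bare metal Ironic monitoring.
      insecure: true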

Automatically defined cluster ID

Enhanced StackLight to automatically set the clusterId parameter that defines the ID of a Container Cloud cluster. Now, you do not need to set or modify this parameter manually when configuring the sf-notifier and sf-reporter services.

Cerebro support for StackLight

Enhanced StackLight by adding support for Cerebro, a web UI that visualizes health of Elasticsearch clusters and allows for convenient debugging. Cerebro is disabled by default.

Ceph maintenance label

Implemented the maintenance label that is set for Ceph during a managed cluster update. This prevents Ceph from rebalancing during the update, which could lead to data loss.

RGW check box in Container Cloud web UI

Implemented the Enable Object Storage checkbox in the Container Cloud web UI to allow enabling a single-instance RGW Object Storage when creating a Ceph cluster as described in Add a Ceph cluster.

Ceph RGW HA

Enhanced Ceph to support RADOS Gateway (RGW) high availability. Now, you can run multiple instances of Ceph RGW in active/active mode.

StackLight proxy

Added proxy support for Alertmanager, Metric collector, Salesforce notifier and reporter, and Telemeter client. Now, these StackLight components automatically use the same proxy that is configured for Container Cloud clusters.

Note

The proxy handles only HTTP and HTTPS traffic. Therefore, for clusters with limited or no Internet access, it is not possible to set up Alertmanager email notifications, which use SMTP, when a proxy is used.

Note

Due to a limitation, StackLight fails to integrate with an external proxy if authentication is handled by the proxy server. In such cases, the proxy server ignores the HTTP Authorization header for basic authentication passed by Prometheus Alertmanager. Therefore, use proxies without authentication or with authentication handled by a reverse proxy.

Components versions

The following table lists the components versions of the Cluster release 6.12.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Distributed storage

Ceph

14.2.12 (Nautilus)

Rook

1.5.5

Container runtime

Mirantis Container Runtime

19.03.14 1

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.3.6 0

LCM

descheduler

0.8.0

Helm

2.16.11-40

helm-controller Updated

0.2.0-258-ga2d72294

lcm-ansible Updated

0.3.0-10-g7c2a87e

lcm-agent Updated

0.2.0-258-ga2d72294

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

8.0.2-20201014133832

Alertmanager

0.21.0

Cerebro New

0.9.3

Elasticsearch

7.6.1

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20200609085335

Grafana

7.1.5

Grafana Image Renderer

2.0.0

IAM Proxy

6.0.1

Kibana

7.6.1

Metric Collector

0.1-20201222100033

Metricbeat

7.6.1

Netchecker

1.4.1

Patroni

12-1.6p3

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.5.1-20201002144823

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20200428121305

Prometheus Postgres Exporter

0.8.0-20201006113956

Prometheus Relay

0.3-20200519054052

Pushgateway

1.2.0

sf-notifier

0.3-20201216142028

sf-reporter

0.1-20201216142628

Telegraf

1.9.1-20201222194740

Telemeter

4.4.0-20200424

0

For the MKE release highlights and components versions, see MKE documentation: MKE release notes.

1
  • For the MCR release highlights, see MCR documentation: MCR release notes.

  • Due to the development limitations, the MCR upgrade to version 19.03.14 on existing Container Cloud clusters is not supported.

Artifacts

This section lists the components artifacts of the Cluster release 6.12.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-127.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v14.2.12

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20210201202754

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.2.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.0

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.5.5


LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible Updated

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.3.0-10-g7c2a87e/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.2.0-258-ga2d72294/lcm-agent

Helm charts

descheduler Updated

https://binary.mirantis.com/core/helm/descheduler-1.17.4.tgz

managed-lcm-api Updated

https://binary.mirantis.com/core/helm/managed-lcm-api-1.17.4.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.17.4.tgz

metrics-server Updated

https://binary.mirantis.com/core/helm/metrics-server-1.17.4.tgz

Docker images

descheduler

mirantis.azurecr.io/lcm/descheduler/v0.8.0

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.2.0-258-ga2d72294

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-13.tgz

cerebro New

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-22.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-2.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-15.tgz

fluentd-elasticsearch

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-33.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-89.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-20.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-8.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-8.tgz

netchecker

https://binary.mirantis.com/core/helm/netchecker-1.4.1.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-19.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-114.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-4.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-3.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-11.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.1.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-401.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-20.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-20.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-12.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-12.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.0.2-20201014133832

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.21.0

alpine-python3-requests

mirantis.azurecr.io/stacklight/alpine-python3-requests:latest-20200618

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro New

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.6.1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20200609085335

gce-proxy

mirantis.azurecr.io/stacklight/gce-proxy:1.11

grafana

mirantis.azurecr.io/stacklight/grafana:7.1.5

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.0

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.2

k8s-netchecker-agent

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-agent:2019.1

k8s-netchecker-server

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-server:2019.1

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:0.1.178

kibana

mirantis.azurecr.io/stacklight/kibana:7.6.1

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20201222100033

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.6.1

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.5.1-20201002144823

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20200428121305

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.8.0-20201006113956

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20200519054052

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20201216142028

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20201216142628

spilo

mirantis.azurecr.io/stacklight/spilo:12-1.6p3

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20201222194740

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq New

mirantis.azurecr.io/stacklight/yq:v4.2.0

6.10.0

The Cluster release 6.10.0 is introduced in the Mirantis Container Cloud release 2.3.0 and supports:

  • Mirantis OpenStack for Kubernetes (MOS) Ussuri Update. For details, see MOS Release Notes.

  • Updated versions of Mirantis Kubernetes Engine 3.3.4 and Mirantis Container Runtime 19.03.13. For details, see MKE Release Notes and MCR Release Notes.

  • Kubernetes 1.18.

For the list of known and resolved issues, refer to the Container Cloud release 2.3.0 section.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 6.10.0.


Ceph Object Storage support

Enhanced Ceph to support RADOS Gateway (RGW) Object Storage.

Ceph state verification

Implemented the capability to obtain detailed information on the Ceph cluster state, including Ceph logs, Ceph OSDs state, and a list of Ceph pools.

Components versions

The following table lists the components versions of the Cluster release 6.10.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Distributed storage

Ceph

14.2.11 (Nautilus)

Rook

1.4.4

Container runtime

Mirantis Container Runtime Updated

19.03.13 1

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.3.4 0

LCM

descheduler

0.8.0

Helm Updated

2.16.11-40

helm-controller Updated

0.2.0-221-g32bd5f56

lcm-ansible Updated

0.2.0-381-g720ec96

lcm-agent Updated

0.2.0-221-g32bd5f56

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

8.0.2-20201014133832

Alertmanager

0.21.0

Elasticsearch

7.6.1

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20200609085335

Grafana

7.1.5

Grafana Image Renderer

2.0.0

IAM Proxy

6.0.1

Kibana

7.6.1

Metric Collector Updated

0.1-20201120155524

Metricbeat

7.6.1

Netchecker

1.4.1

Patroni

12-1.6p3

Prometheus Updated

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.5.1-20201002144823

Prometheus libvirt Exporter

0.1-20200610164751

Prometheus Memcached Exporter

0.5.0

Prometheus MySQL Exporter

0.11.0

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20200428121305

Prometheus Postgres Exporter

0.8.0-20201006113956

Prometheus RabbitMQ Exporter Updated

v1.0.0-RC7.1

Prometheus Relay

0.3-20200519054052

Pushgateway

1.2.0

sf-notifier

0.3-20201001081256

sf-reporter

0.1-20200219140217

Telegraf Updated

1.9.1-20201120081248

Telemeter

4.4.0-20200424

0

For the MKE release highlights and components versions, see MKE documentation: MKE release notes.

1
  • For the MCR release highlights, see MCR documentation: MCR release notes.

  • Due to the development limitations, the MCR upgrade to version 19.03.14 on existing Container Cloud clusters is not supported.

Artifacts

This section lists the components artifacts of the Cluster release 6.10.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-95.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v14.2.11

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20201215142221

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.1.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v1.2.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v1.6.0

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v2.1.1

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v2.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.4.4


LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible Updated

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.2.0-381-g720ec96/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.2.0-221-g32bd5f56/lcm-agent

Helm charts

descheduler Updated

https://binary.mirantis.com/core/helm/descheduler-1.15.1.tgz

managed-lcm-api New

https://binary.mirantis.com/core/helm/managed-lcm-api-1.15.1.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.15.1.tgz

metrics-server Updated

https://binary.mirantis.com/core/helm/metrics-server-1.15.1.tgz

Docker images

descheduler

mirantis.azurecr.io/lcm/descheduler/v0.8.0

helm Updated

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.2.0-221-g32bd5f56

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-13.tgz

elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-22.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-2.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-15.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-33.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-74.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-20.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-5.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-8.tgz

netchecker

https://binary.mirantis.com/core/helm/netchecker-1.4.1.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-17.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-102.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-3.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-3.tgz

prometheus-libvirt-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-libvirt-exporter-0.1.0-mcp-2.tgz

prometheus-memcached-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-memcached-exporter-0.1.0-mcp-1.tgz

prometheus-mysql-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-mysql-exporter-0.3.2-mcp-1.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

prometheus-rabbitmq-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-rabbitmq-exporter-0.4.1-mcp-1.tgz

sf-notifier Updated

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-9.tgz

sf-reporter Updated

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.1.0-mcp-8.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-354.tgz

telegraf-ds Updated

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-19.tgz

telegraf-s Updated

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-19.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-11.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-11.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.0.2-20201014133832

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.21.0

alpine-python3-requests

mirantis.azurecr.io/stacklight/alpine-python3-requests:latest-20200618

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.6.1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20200609085335

gce-proxy

mirantis.azurecr.io/stacklight/gce-proxy:1.11

grafana

mirantis.azurecr.io/stacklight/grafana:7.1.5

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.0

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.2

k8s-netchecker-agent

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-agent:2019.1

k8s-netchecker-server

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-server:2019.1

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:0.1.178

kibana

mirantis.azurecr.io/stacklight/kibana:7.6.1

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20201120155524

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.6.1

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus Updated

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.5.1-20201002144823

prometheus-libvirt-exporter

mirantis.azurecr.io/stacklight/libvirt-exporter:v0.1-20200610164751

prometheus-memcached-exporter

mirantis.azurecr.io/stacklight/memcached-exporter:v0.5.0

prometheus-mysql-exporter

mirantis.azurecr.io/stacklight/mysqld-exporter:v0.11.0

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20200428121305

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.8.0-20201006113956

prometheus-rabbitmq-exporter Updated

mirantis.azurecr.io/stacklight/rabbitmq-exporter:v1.0.0-RC7.1

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20200519054052

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20201001081256

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20200219140217

spilo

mirantis.azurecr.io/stacklight/spilo:12-1.6p3

telegraf Updated

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20201120081248

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

6.8.1

The Cluster release 6.8.1 is introduced in the Mirantis Container Cloud release 2.2.0. This Cluster release is based on the Cluster release 5.8.0, and the main difference is support for the Mirantis OpenStack for Kubernetes (MOS) product.

For details about MOS, see MOS Release Notes.

For details about the Cluster release 5.8.0, refer to the 5.8.0 section.

5.x series

This section outlines release notes for the unsupported Cluster releases of the 5.x series.

5.22.0

This section outlines release notes for the Cluster release 5.22.0 that is introduced in the Mirantis Container Cloud release 2.15.0. This Cluster release supports Mirantis Container Runtime 20.10.8 and Mirantis Kubernetes Engine 3.3.13 with Kubernetes 1.18.

For the list of known and resolved issues, refer to the Container Cloud release 2.15.0 section.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 5.22.0.


MCR version update

Updated the Mirantis Container Runtime (MCR) version from 20.10.6 to 20.10.8 for the Container Cloud management, regional, and managed clusters on all supported cloud providers.

Mirantis Container Cloud alerts

Implemented the MCCLicenseExpirationCritical and MCCLicenseExpirationMajor alerts that notify about the Mirantis Container Cloud license expiring in less than 10 and 30 days, respectively.

Improvements to StackLight alerting

Implemented the following improvements to StackLight alerting:

  • Enhanced Kubernetes applications alerting:

    • Reworked the Kubernetes applications alerts to minimize flapping, avoid firing during pod rescheduling, and to detect crash looping for pods that restart less frequently.

    • Added the KubeDeploymentOutage, KubeStatefulSetOutage, and KubeDaemonSetOutage alerts.

    • Removed the redundant KubeJobCompletion alert.

    • Enhanced the alert inhibition rules to reduce alert flooding.

    • Improved alert descriptions.

  • Split TelemeterClientFederationFailed into TelemeterClientFailed and TelemeterClientHAFailed to separate alerts depending on the HA mode disabled or enabled.

  • Updated the description for DockerSwarmNodeFlapping.

Node Exporter collectors

Disabled unused Node Exporter collectors and implemented the capability to manually enable needed collectors using the nodeExporter.extraCollectorsEnabled parameter (see the sketch after the list below). Only the following collectors are now enabled by default in StackLight:

  • arp

  • conntrack

  • cpu

  • diskstats

  • entropy

  • filefd

  • filesystem

  • hwmon

  • loadavg

  • meminfo

  • netdev

  • netstat

  • nfs

  • stat

  • sockstat

  • textfile

  • time

  • timex

  • uname

  • vmstat
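The following fragment is a minimal sketch of re-enabling an additional collector through the StackLight Helm chart values; the list form of the parameter and the systemd collector name are assumptions used only for illustration:

    nodeExporter:
      # Assumption: example override; "systemd" is an arbitrary collector
      # name, not a recommendation.
      extraCollectorsEnabled:
        - systemd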

Automated Ceph LCM

Implemented full support for automated Ceph LCM operations using the KaaSCephOperationRequest CR, such as addition or removal of Ceph OSDs and nodes, as well as replacement of failed Ceph OSDs or nodes.

Learn more

Automated Ceph LCM

Ceph CSI provisioner tolerations and node affinity

Implemented the capability to specify Container Storage Interface (CSI) provisioner tolerations and node affinity for different Rook resources. Added support for the all and mds keys in toleration rules.

Ceph KaaSCephCluster.status enhancement

Extended the fullClusterInfo section of the KaaSCephCluster.status resource with the following fields:

  • cephDetails - contains verbose details of a Ceph cluster state

  • cephCSIPluginDaemonsStatus - contains details on all Ceph CSIs

Ceph Shared File System (CephFS)

TechPreview

Implemented the capability to enable the Ceph Shared File System, or CephFS, to create read/write shared file system Persistent Volumes (PVs).

Components versions

The following table lists the components versions of the Cluster release 5.22.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Components versions of the Cluster release 5.22.0

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.3.13 0

Container runtime

Mirantis Container Runtime Updated

20.10.8 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook

1.7.6

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-132-g83a348fa

lcm-ansible Updated

0.13.0-26-gad73ff7

lcm-agent Updated

0.3.0-132-g83a348fa

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

8.5.0-20211108051042

Alertmanager Updated

0.23.0

Alertmanager Webhook ServiceNow

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch

7.10.2-2021110210112

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20210915110132

Grafana Updated

8.2.7

Grafana Image Renderer

3.2.1

IAM Proxy

6.0.1

Kibana

7.10.2-20211101074638

Metric Collector

0.1-20211109121134

Metricbeat

7.10.2-20211103140113

Patroni

13-2.0p6-20210525081943

Prometheus Updated

2.31.1

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.14.0-20210812120726

Prometheus MS Teams

1.4.2

Prometheus Node Exporter Updated

1.2.2

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

Pushgateway Removed

n/a

sf-notifier

0.3-20210930112115

sf-reporter

0.1-20210607111404

Telegraf

1.9.1-20210225142050

1.20.0-20210927090119

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 5.22.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-606.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20220110132813

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.7.6


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.13.0-26-gad73ff7/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-132-g83a348fa/lcm-agent

Helm charts Updated

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.28.7.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.28.7.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.28.7.tgz

Docker images

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-132-g83a348fa

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta Updated

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-25.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-1.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-37.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-6.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-32.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-115.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-121.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.10.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-30.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-3.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-12.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-36.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-214.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-7.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-1.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-1.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.3.1.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-1.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-1.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20211108051042

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0.23.0

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.10.2-20211102101126

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210915110132

grafana Updated

mirantis.azurecr.io/stacklight/grafana:8.2.7

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.2.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana

mirantis.azurecr.io/stacklight/kibana:7.10.2-20211101074638

kube-state-metrics Updated

mirantis.azurecr.io/stacklight/kube-state-metrics:v2.2.4

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20211109121134

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20211103140113

node-exporter Updated

mirantis.azurecr.io/stacklight/node-exporter:v1.2.2

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus Updated

mirantis.azurecr.io/stacklight/prometheus:v2.31.1

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20210812120726

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway Removed

n/a

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210930112115

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20210607111404

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.0p6-20210525081943

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

mirantis.azurecr.io/stacklight/telegraf:1.20.0-20210927090119

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


5.21.0

This section outlines release notes for the Cluster release 5.21.0 that is introduced in the Mirantis Container Cloud release 2.14.0. This Cluster release supports Mirantis Container Runtime 20.10.6 and Mirantis Kubernetes Engine 3.3.12 with Kubernetes 1.18.

For the list of known and resolved issues, refer to the Container Cloud release 2.14.0 section.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 5.21.0.


MKE version update from 3.3.12 to 3.3.13

Updated the Mirantis Kubernetes Engine version from 3.3.12 to 3.3.13 for the Container Cloud management, regional, and managed clusters. Also, added support for attachment of existing MKE 3.3.13 clusters.

Network interfaces monitoring

Limited the number of monitored network interfaces to prevent extended Prometheus RAM consumption in big clusters. By default, Prometheus Node Exporter now only collects information about a basic set of interfaces, both host and container. If required, you can edit the list of excluded devices.

Custom Prometheus recording rules

Implemented the capability to define custom Prometheus recording rules through the prometheusServer.customRecordingRules parameter in the StackLight Helm chart. Overriding of existing recording rules is not supported.
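A minimal sketch of such a rule in the StackLight Helm chart values follows; the exact schema under prometheusServer.customRecordingRules and the rule itself are assumptions shown for illustration only:

    prometheusServer:
      customRecordingRules:
        # Assumption: standard Prometheus recording-rule layout with an
        # illustrative group and expression.
        - name: custom.rules
          rules:
            - record: instance:node_cpu_utilisation:avg5m
              expr: 1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))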

Syslog packet size configuration

Implemented the capability to configure packet size for the syslog logging output. If remote logging to syslog is enabled in StackLight, use the logging.syslog.packetSize parameter in the StackLight Helm chart to configure the packet size.
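For illustration, assuming remote logging to syslog is already enabled, the packet size could be set as follows; the value itself is arbitrary:

    logging:
      syslog:
        # Assumption: syslog forwarding is enabled elsewhere in the values;
        # 4096 is an arbitrary example packet size.
        packetSize: 4096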

Prometheus Relay configuration

Implemented the capability to configure the Prometheus Relay client timeout and response size limit through the prometheusRelay.clientTimeout and prometheusRelay.responseLimitBytes parameters in the StackLight Helm chart.
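A hedged sketch of both parameters in the StackLight Helm chart values follows; the example values and their units are assumptions:

    prometheusRelay:
      # Assumption: illustrative values; tune the timeout and the response
      # size limit to the cluster size and load.
      clientTimeout: 30
      responseLimitBytes: 1048576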

Ceph networks validation

Implemented additional validation of networks specified in spec.cephClusterSpec.network.publicNet and spec.cephClusterSpec.network.clusterNet and prohibited the use of the 0.0.0.0/0 CIDR. Now, the bare metal provider automatically translates the 0.0.0.0/0 network range to the default LCM IPAM subnet if it exists.

You can now also add corresponding labels for the bare metal IPAM subnets when configuring the Ceph cluster during the management cluster deployment.
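The validated fields are illustrated below with arbitrary example ranges; everything except the field paths is an assumption:

    spec:
      cephClusterSpec:
        network:
          # Assumption: example CIDRs; the 0.0.0.0/0 range is prohibited by
          # the validation described above.
          publicNet: 10.0.10.0/24
          clusterNet: 10.0.11.0/24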

Enhanced Ceph architecture

To improve debugging and log reading, separated Ceph Controller, Ceph Status Controller, and Ceph Request Controller, which used to run in one pod, into three different deployments.

Automated Ceph OSD removal

TechPreview

Implemented the KaaSCephOperationRequest CR that provides LCM operations for Ceph OSDs and nodes by automatically creating separate CephOsdRemoveRequest requests. It allows for automated removal of healthy or non-healthy Ceph OSDs from a Ceph cluster.

Due to the Technology Preview status of the feature, Mirantis recommends following Remove Ceph OSD manually for Ceph OSD removal.

Learn more

Manage Ceph

Components versions

The following table lists the components versions of the Cluster release 5.21.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Components versions of the Cluster release 5.21.0

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.3.13 0

Container runtime

Mirantis Container Runtime

20.10.6 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook Updated

1.7.6

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-104-gb7f5e8d8

lcm-ansible Updated

0.12.0-6-g5329efe

lcm-agent Updated

0.3.0-104-gb7f5e8d8

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta Updated

8.5.0-20211108051042

Alertmanager

0.22.2

Alertmanager Webhook ServiceNow

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch Updated

7.10.2-2021110210112

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20210915110132

Grafana Updated

8.2.2

Grafana Image Renderer Updated

3.2.1

IAM Proxy

6.0.1

Kibana Updated

7.10.2-20211101074638

Metric Collector Updated

0.1-20211109121134

Metricbeat Updated

7.10.2-20211103140113

Patroni

13-2.0p6-20210525081943

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.14.0-20210812120726

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

Pushgateway

1.2.0

sf-notifier

0.3-20210930112115

sf-reporter

0.1-20210607111404

Telegraf

1.9.1-20210225142050

1.20.0-20210927090119

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 5.21.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-526.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20211109132703

cephcsi Updated

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.4.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook Updated

mirantis.azurecr.io/ceph/rook/ceph:v1.7.6


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.12.0-6-g5329efe/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-104-gb7f5e8d8/lcm-agent

Helm charts

managed-lcm-api Updated

https://binary.mirantis.com/core/helm/managed-lcm-api-1.27.6.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.27.6.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.27.6.tgz

Docker images

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-104-gb7f5e8d8

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-22.tgz

alertmanager-webhook-servicenow Updated

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.2.0-mcp-1.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-37.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-6.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-32.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-112.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-115.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.9.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-30.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.3.0-mcp-1.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-12.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-36.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-208.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-7.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.2.0-mcp-1.tgz

sf-notifier Updated

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.2.0-mcp-1.tgz

sf-reporter Updated

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.2.0-mcp-1.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.2.5.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.2.0-mcp-1.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.2.0-mcp-1.tgz

Docker images

alerta Updated

mirantis.azurecr.io/stacklight/alerta-web:8.5.0-20211108051042

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.22.2

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch Updated

mirantis.azurecr.io/stacklight/elasticsearch:7.10.2-20211102101126

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210915110132

grafana Updated

mirantis.azurecr.io/stacklight/grafana:8.2.2

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:3.2.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana Updated

mirantis.azurecr.io/stacklight/kibana:7.10.2-20211101074638

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20211109121134

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.10.2-20211103140113

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20210812120726

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210930112115

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20210607111404

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.0p6-20210525081943

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

mirantis.azurecr.io/stacklight/telegraf:1.20.0-20210927090119

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


5.20.0

This section outlines release notes for the Cluster release 5.20.0 that is introduced in the Mirantis Container Cloud release 2.13.0. This Cluster release supports Mirantis Container Runtime 20.10.6 and Mirantis Kubernetes Engine 3.3.12 with Kubernetes 1.18.

For the list of known and resolved issues, refer to the Container Cloud release 2.13.0 section.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 5.20.0.


Improvements to StackLight alerting

Implemented the following improvements to StackLight alerting:

  • Implemented per-service *TargetDown and *TargetsOutage alerts that raise if one or all Prometheus targets are down.

  • Enhanced the alert inhibition rules to reduce alert flooding.

  • Removed the following inefficient alerts:

    • TargetDown

    • TargetFlapping

    • KubeletDown

    • ServiceNowWebhookReceiverDown

    • SfNotifierDown

    • PrometheusMsTeamsDown

Components versions

The following table lists the components versions of the Cluster release 5.20.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Components versions of the Cluster release 5.20.0

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.3.12 0

Container runtime

Mirantis Container Runtime

20.10.6 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook

1.6.8

LCM

Helm

2.16.11-40

helm-controller Updated

0.3.0-67-g25ab9f1a

lcm-ansible Updated

0.11.0-6-gbfce76e

lcm-agent Updated

0.3.0-67-g25ab9f1a

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

8.4.1-20210707092546

Alertmanager

0.22.2

Alertmanager Webhook ServiceNow

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch

7.10.2-20210601104922

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd Updated

1.10.2-20210915110132

Grafana Updated

8.1.2

Grafana Image Renderer

2.0.1

IAM Proxy

6.0.1

Kibana

7.10.2-20210601104911

Metric Collector

0.1-20210219112938

Metricbeat

7.10.2

Patroni

13-2.0p6-20210525081943

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.14.0-20210812120726

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

Pushgateway

1.2.0

sf-notifier Updated

0.3-20210930112115

sf-reporter New

0.1-20210607111404

Telegraf

1.9.1-20210225142050

New 1.20.0-20210927090119

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 5.20.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-427.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20211013104642

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.3.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.6.8


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.11.0-6-gbfce76e/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-67-g25ab9f1a/lcm-agent

Helm charts

managed-lcm-api Updated

https://binary.mirantis.com/core/helm/managed-lcm-api-1.26.6.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.26.6.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.24.6.tgz

Docker images

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-67-g25ab9f1a

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-22.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.1.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-37.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-6.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-32.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-105.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-110.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.9.tgz

kibana Updated

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-30.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-12.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-12.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-34.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-202.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-7.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-16.tgz

sf-reporter New

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.1.0-mcp-13.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-807.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-19.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-19.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.4.1-20210707092546

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.22.2

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.10.2-20210601104922

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210915110132

grafana Updated

mirantis.azurecr.io/stacklight/grafana:8.1.2

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana

mirantis.azurecr.io/stacklight/kibana:7.10.2-20210601104911

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20210219112938

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20210812120726

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210930112115

sf-reporter New

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20210607111404

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.0p6-20210525081943

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

New mirantis.azurecr.io/stacklight/telegraf:1.20.0-20210927090119

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


5.19.0

This section outlines release notes for the Cluster release 5.19.0 that is introduced in the Mirantis Container Cloud release 2.12.0. This Cluster release supports Mirantis Container Runtime 20.10.6 and Mirantis Kubernetes Engine 3.3.12 with Kubernetes 1.18.

For the list of known and resolved issues, refer to the Container Cloud release 2.12.0 section.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 5.19.0.


MCR and MKE versions update

Updated the Mirantis Container Runtime (MCR) version from 20.10.5 to 20.10.6 and the Mirantis Kubernetes Engine (MKE) version from 3.3.6 to 3.3.12 for the Container Cloud management, regional, and managed clusters. Also, added support for attaching existing MKE clusters 3.3.7-3.3.12 and 3.4.1-3.4.5.

For the MCR release highlights and components versions, see MCR documentation: MCR release notes and MKE documentation: MKE release notes.

Ceph maintenance improvement

Integrated Ceph maintenance into the common upgrade procedure. The maintenance flag functionality is now handled programmatically, and the flag itself is deprecated.

Ceph RADOS Gateway tolerations

Technology Preview

Implemented the capability to specify RADOS Gateway tolerations through the KaaSCephCluster spec using the native Rook way for setting resource requirements for Ceph daemons.
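
The tolerations that Rook consumes follow the standard Kubernetes schema. The sketch below only renders such a toleration block in Python; the exact key under which it is placed in the KaaSCephCluster spec is deployment-specific and is not shown here, and the taint key and value are hypothetical.

    # Renders a standard Kubernetes tolerations block, as consumed by Rook for
    # placing RADOS Gateway daemons on tainted nodes. The taint key and value
    # below are hypothetical; the exact location of this block inside the
    # KaaSCephCluster spec depends on your deployment.
    import json

    rgw_tolerations = [
        {
            "key": "ceph-rgw",       # hypothetical taint key
            "operator": "Equal",
            "value": "dedicated",    # hypothetical taint value
            "effect": "NoSchedule",
        },
    ]

    print(json.dumps(rgw_tolerations, indent=2))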

Short names for Kubernetes nodes in Grafana dashboards

Enhanced the Grafana dashboards to display user-friendly short names for Kubernetes nodes, for example, master-0, instead of long name labels such as kaas-node-f736fc1c-3baa-11eb-8262-0242ac110002. This feature ensures consistency with the Kubernetes node naming in the Container Cloud web UI.

All Grafana dashboards that present node data now have an additional Node identifier drop-down menu. By default, it is set to machine to display short names for Kubernetes nodes. To display Kubernetes node name labels as previously, change this option to node.

Improvements to StackLight alerting

Implemented the following improvements to StackLight alerting:

  • Enhanced the alert inhibition rules.

  • Reworked a number of alerts to improve alerting efficiency and reduce alert flooding.

  • Removed the inefficient DockerSwarmLeadElectionLoop and SystemDiskErrorsTooHigh alerts.

  • Added the matchers key to the routes configuration and deprecated the match and match_re keys (see the sketch after this list).
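
The matchers syntax follows the upstream Alertmanager 0.22 format. Below is a minimal sketch, assuming PyYAML is installed, that renders the same route in the deprecated match/match_re form and in the matchers form; the receiver name is illustrative, not a StackLight default.

    # Renders an Alertmanager route in the deprecated match/match_re form and in
    # the matchers form introduced in Alertmanager 0.22. Requires PyYAML; the
    # receiver name is illustrative only.
    import yaml

    deprecated_route = {
        "receiver": "example-receiver",
        "match": {"severity": "critical"},           # deprecated key
        "match_re": {"service": "stacklight|ceph"},  # deprecated key
    }

    matchers_route = {
        "receiver": "example-receiver",
        "matchers": [
            'severity = "critical"',
            'service =~ "stacklight|ceph"',
        ],
    }

    print(yaml.safe_dump({"routes": [deprecated_route]}, sort_keys=False))
    print(yaml.safe_dump({"routes": [matchers_route]}, sort_keys=False))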

Logs-based metrics in StackLight

Implemented the capability to create custom logs-based metrics that you can use to configure StackLight notifications.

Components versions

The following table lists the components versions of the Cluster release 5.19.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Components versions of the Cluster release 5.19.0

Component

Application/Service

Version

Cluster orchestration Updated

Mirantis Kubernetes Engine

3.3.12 0

Container runtime Updated

Mirantis Container Runtime

20.10.6 1

Distributed storage

Ceph

15.2.13 (Octopus)

Rook

1.6.8

LCM

descheduler Removed

n/a

Helm

2.16.11-40

helm-controller Updated

0.3.0-32-gee08c2b8

lcm-ansible Updated

0.10.0-12-g7cd13b6

lcm-agent Updated

0.3.0-32-gee08c2b8

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

8.4.1-20210707092546

Alertmanager

0.22.2

Alertmanager Webhook ServiceNow

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch

7.10.2-20210601104922

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20210602174807

Grafana

7.5.4

Grafana Image Renderer

2.0.1

IAM Proxy

6.0.1

Kibana

7.10.2-20210601104911

Metric Collector

0.1-20210219112938

Metricbeat

7.10.2

Patroni

13-2.0p6-20210525081943

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter Updated

0.14.0-20210812120726

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

Pushgateway

1.2.0

sf-notifier

0.3-20210702081359

Telegraf

1.9.1-20210225142050

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 5.19.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-409.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20210921155643

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.3.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner Updated

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.6.8


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.10.0-12-g7cd13b6/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.3.0-32-gee08c2b8/lcm-agent

Helm charts

descheduler Removed

n/a

managed-lcm-api Updated

https://binary.mirantis.com/core/helm/managed-lcm-api-1.25.6.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.25.6.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.24.6.tgz

Docker images

descheduler Removed

n/a

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.3.0-32-gee08c2b8

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-22.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.1.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-36.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-6.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-32.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-97.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-110.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.8.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-29.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-12.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-12.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-34.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-201.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-7.tgz

prometheus-es-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-11.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-16.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-595.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s Updated

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-30.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-19.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-19.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.4.1-20210707092546

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.22.2

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.10.2-20210601104922

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210602174807

grafana

mirantis.azurecr.io/stacklight/grafana:7.5.4

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana

mirantis.azurecr.io/stacklight/kibana:7.10.2-20210601104911

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20210219112938

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.14.0-20210812120726

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210702081359

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.0p6-20210525081943

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


5.18.0

This section outlines release notes for the Cluster release 5.18.0 that is introduced in the Mirantis Container Cloud release 2.11.0. This Cluster release supports Mirantis Container Runtime 20.10.5 and Mirantis Kubernetes Engine 3.3.6 with Kubernetes 1.18.

For the list of known and resolved issues, refer to the Container Cloud release 2.11.0 section.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 5.18.0.


Ceph Octopus

Upgraded Ceph from 14.2.19 (Nautilus) to 15.2.13 (Octopus) and Rook from 1.5.9 to 1.6.8.

Hyperconverged Ceph improvement

Technology Preview

Implemented the capability to define Ceph tolerations and resource management through the KaaSCephCluster spec using the native Rook way for setting resource requirements for Ceph daemons.

Ceph cluster status

Improved the MiraCephLog custom resource by adding more information about all Ceph cluster entities and their statuses. The MiraCeph and MiraCephLog statuses and the MiraCephLog values are now integrated into KaaSCephCluster.status and can be viewed using the miraCephInfo, shortClusterInfo, and fullClusterInfo fields.

Ceph Manager modules

Implemented the capability to define a list of Ceph Manager modules to enable on the Ceph cluster using the mgr.modules parameter in KaaSCephCluster.

StackLight node labeling improvements

Implemented the following improvements for the StackLight node labeling during a cluster creation or post-deployment configuration:

  • Added a verification that a cluster contains a minimum of 3 worker nodes with the StackLight label for clusters with StackLight deployed in HA mode. This verification applies to the cluster deployment and update processes. For details on how to add the StackLight label before upgrading to the latest Cluster releases of Container Cloud 2.11.0, refer to Upgrade managed clusters with StackLight deployed in HA mode. See the sketch after the caution below for a quick way to check the label count.

  • Added a notification about the minimum number of worker nodes with the StackLight label for HA StackLight deployments to the cluster live status description in the Container Cloud web UI.

Caution

Removing the StackLight label from worker nodes, as well as removing worker nodes that carry the StackLight label, can cause the StackLight components to become inaccessible. It is important to keep the worker nodes where the StackLight local volumes were provisioned.
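
Before updating a cluster with StackLight in HA mode, the precondition above can be checked by counting the worker nodes that carry the StackLight label. The following is a minimal sketch; it assumes kubectl access to the managed cluster and that the node label is stacklight=enabled, which may differ in your environment.

    # Counts worker nodes that carry the StackLight label before an update.
    # Assumes kubectl access to the managed cluster and that the node label is
    # stacklight=enabled; adjust the selector to match your environment.
    import json
    import subprocess

    LABEL_SELECTOR = "stacklight=enabled"  # assumed label key and value

    output = subprocess.check_output(
        ["kubectl", "get", "nodes", "-l", LABEL_SELECTOR, "-o", "json"]
    )
    nodes = json.loads(output)["items"]

    print(f"Nodes with the StackLight label: {len(nodes)}")
    if len(nodes) < 3:
        print("WARNING: StackLight in HA mode requires at least 3 labeled worker nodes")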

StackLight log level severity setting in web UI

Implemented the capability to set the default log level severity for all StackLight components as well as set a custom log level severity for specific StackLight components in the Container Cloud web UI. You can update this setting either during a managed cluster creation or during a post-deployment configuration.

Improvements to StackLight alerting

Implemented the following improvements to StackLight alerting:

  • Added the following alerts:

    • KubeContainersCPUThrottlingHigh that raises in case of container CPU throttling.

    • KubeletDown that raises if kubelet is down.

  • Reworked a number of alerts to improve alerting efficiency and reduce alert flooding.

  • Reworked the alert inhibition rules.

  • Removed the following inefficient alerts:

    • FileDescriptorUsageCritical

    • KubeCPUOvercommitNamespaces

    • KubeMemOvercommitNamespaces

    • KubeQuotaExceeded

    • ContainerScrapeError

Salesforce feed update

Implemented the capability to enable feed update in Salesforce using the feed_enabled parameter. By default, this parameter is set to false to save API calls.

Documentation enhancements

On top of continuous improvements delivered to the existing Container Cloud guides, added a procedure on how to manually remove a Ceph OSD from a Ceph cluster.

Components versions

The following table lists the components versions of the Cluster release 5.18.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Components versions of the Cluster release 5.18.0

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.3.6 0

Container runtime

Mirantis Container Runtime

20.10.5 1

Distributed storage Updated

Ceph

15.2.13 (Octopus)

Rook

1.6.8

LCM

descheduler

0.8.0

Helm

2.16.11-40

helm-controller Updated

0.2.0-399-g85be100f

lcm-ansible Updated

0.9.0-17-g28bc9ce

lcm-agent Updated

0.2.0-399-g85be100f

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta Updated

8.4.1-20210707092546

Alertmanager

0.22.2

Alertmanager Webhook ServiceNow

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch

7.10.2-20210601104922

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20210602174807

Grafana

7.5.4

Grafana Image Renderer

2.0.1

IAM Proxy

6.0.1

Kibana

7.10.2-20210601104911

Metric Collector

0.1-20210219112938

Metricbeat

7.10.2

Patroni

13-2.0p6-20210525081943

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.5.1-20210323132924

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter Updated

0.1-20210708141736

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

Pushgateway

1.2.0

sf-notifier Updated

0.3-20210702081359

Telegraf

1.9.1-20210225142050

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 5.18.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-368.tgz

Docker images

ceph Updated

mirantis.azurecr.io/ceph/ceph:v15.2.13

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20210807103257

cephcsi Updated

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.3.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner Updated

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.2

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook Updated

mirantis.azurecr.io/ceph/rook/ceph:v1.6.8


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.9.0-17-g28bc9ce/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.2.0-399-g85be100f/lcm-agent

Helm charts Updated

descheduler

https://binary.mirantis.com/core/helm/descheduler-1.24.6.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.24.6.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.24.6.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.24.6.tgz

Docker images

descheduler

mirantis.azurecr.io/lcm/descheduler/v0.8.0

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.2.0-399-g85be100f

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-22.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.1.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-36.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-6.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-30.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-96.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-108.tgz

iam-proxy Updated

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.3.tgz

kibana Updated

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-29.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-12.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-12.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-33.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-188.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-7.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-10.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-16.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-574.tgz

telegraf-ds Updated

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-29.tgz

telegraf-s Updated

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-29.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-17.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-17.tgz

Docker images

alerta Updated

mirantis.azurecr.io/stacklight/alerta-web:8.4.1-20210707092546

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.22.2

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.10.2-20210601104922

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210602174807

grafana

mirantis.azurecr.io/stacklight/grafana:7.5.4

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.1

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.19.13

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana

mirantis.azurecr.io/stacklight/kibana:7.10.2-20210601104911

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20210219112938

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.5.1-20210323132924

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20210708141736

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210702081359

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.0p6-20210525081943

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


5.17.0

This section outlines release notes for the Cluster release 5.17.0 that is introduced in the Mirantis Container Cloud release 2.10.0. This Cluster release introduces support for the updated version of Mirantis Container Runtime 20.10.5 and supports Mirantis Kubernetes Engine 3.3.6 with Kubernetes 1.18.

For the list of known and resolved issues, refer to the Container Cloud release 2.10.0 section.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 5.17.0.


Graceful MCR upgrade

Implemented a graceful Mirantis Container Runtime (MCR) upgrade from 19.03.14 to 20.10.5 on existing Container Cloud clusters.

MKE logs gathering enhancements

Improved MKE log gathering by replacing the default DEBUG log level with INFO. This change reduces the unnecessary load on the MKE cluster caused by the excessive amount of logs generated with the DEBUG level enabled.

Log verbosity for StackLight components

Implemented the capability to configure the verbosity level of logs produced by all StackLight components or by each component separately.

Improvements to StackLight alerting

Implemented the following improvements to StackLight alerting:

  • Added the following alerts:

    • PrometheusMsTeamsDown that raises if prometheus-msteams is down.

    • ServiceNowWebhookReceiverDown that raises if alertmanager-webhook-servicenow is down.

    • SfNotifierDown that raises if the sf-notifier is down.

    • KubeAPICertExpirationMajor, KubeAPICertExpirationWarning, MKEAPICertExpirationMajor, and MKEAPICertExpirationWarning that inform about SSL certificate expiration (see the sketch after this list).

  • Removed the inefficient PostgresqlPrimaryDown alert.

  • Reworked a number of alerts to improve alerting efficiency and reduce alert flooding.

  • Reworked the alert inhibition rules to match the receivers.

  • Updated Alertmanager to v0.22.2.

  • Changed the default behavior of the Salesforce alerts integration. Now, by default, only Critical alerts are sent to Salesforce.
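
For context on the certificate expiration alerts listed above, the sketch below reports the number of days until a TLS endpoint's serving certificate expires, which is the condition those alerts track. The host and port are placeholders to replace with your MKE or Kubernetes API endpoint, and the sketch assumes that the endpoint presents a certificate trusted by the local system.

    # Reports days until the serving certificate of a TLS endpoint expires, which
    # is the condition tracked by the certificate expiration alerts. The host and
    # port are placeholders; the endpoint certificate must be trusted locally for
    # the handshake to succeed.
    import socket
    import ssl
    import time

    HOST, PORT = "mke.example.com", 443  # placeholder endpoint

    context = ssl.create_default_context()
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            not_after = tls.getpeercert()["notAfter"]

    days_left = int((ssl.cert_time_to_seconds(not_after) - time.time()) // 86400)
    print(f"{HOST}:{PORT} certificate expires in {days_left} days ({not_after})")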

Proxy configuration on existing clusters

Implemented the capability to add or configure proxy on existing Container Cloud managed clusters using the Container Cloud web UI.

Documentation enhancements

On top of continuous improvements delivered to the existing Container Cloud guides, added a procedure on how to move a Ceph Monitor daemon to another node.

Components versions

The following table lists the components versions of the Cluster release 5.17.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Components versions of the Cluster release 5.17.0

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.3.6 0

Container runtime

Mirantis Container Runtime Updated

20.10.5 1

Distributed storage

Ceph

14.2.19 (Nautilus)

Rook

1.5.9

LCM

descheduler

0.8.0

Helm

2.16.11-40

helm-controller Updated

0.2.0-372-g7e042f4d

lcm-ansible Updated

0.8.0-17-g63ec424

lcm-agent Updated

0.2.0-373-gae771bb4

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

8.4.1-20210312131419

Alertmanager Updated

0.22.2

Alertmanager Webhook ServiceNow Updated

0.1-20210601141858

Cerebro

0.9.3

Elasticsearch Updated

7.10.2-20210601104922

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd Updated

1.10.2-20210602174807

Grafana

7.5.4

Grafana Image Renderer

2.0.1

IAM Proxy

6.0.1

Kibana Updated

7.10.2-20210601104911

Metric Collector

0.1-20210219112938

Metricbeat

7.10.2

Patroni

13-2.0p6-20210525081943

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.5.1-20210323132924

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20200428121305

Prometheus Postgres Exporter

0.9.0

Prometheus Relay

0.3-20210317133316

Pushgateway

1.2.0

sf-notifier Updated

0.3-20210617140951

sf-reporter Updated

0.1-20210607111404

Telegraf

1.9.1-20210225142050

Telemeter

4.4.0-20200424

0

For the Mirantis Kubernetes Engine (MKE) release highlights and components versions, see MKE documentation: MKE release notes.

1

For the Mirantis Container Runtime (MCR) release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 5.17.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-305.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v14.2.19

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20210716222903

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.2.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.1

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.5.9


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.8.0-17-g63ec424/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.2.0-373-gae771bb4/lcm-agent

Helm charts Updated

descheduler

https://binary.mirantis.com/core/helm/descheduler-1.23.2.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.23.2.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.23.2.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.23.2.tgz

Docker images

descheduler

mirantis.azurecr.io/lcm/descheduler/v0.8.0

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.2.0-372-g7e042f4d

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta Updated

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-22.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.1.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-33.tgz

elasticsearch-curator Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-6.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-25.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-93.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-105.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

kibana Updated

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-27.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-12.tgz

metricbeat Updated

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-12.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-30.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-158.tgz

prometheus-blackbox-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-7.tgz

prometheus-es-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-10.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

sf-notifier Updated

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-16.tgz

sf-reporter Updated

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.1.0-mcp-13.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-538.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-20.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-20.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-16.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-16.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.4.1-20210312131419

alertmanager Updated

mirantis.azurecr.io/stacklight/alertmanager:v0.22.2

alertmanager-webhook-servicenow Updated

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210601141858

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch Updated

mirantis.azurecr.io/stacklight/elasticsearch:7.10.2-20210601104922

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210602174807

grafana

mirantis.azurecr.io/stacklight/grafana:7.5.4

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.2

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana Updated

mirantis.azurecr.io/stacklight/kibana:7.10.2-20210601104911

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20210219112938

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.10.2

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.5.1-20210323132924

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20200428121305

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210617140951

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20210607111404

spilo

mirantis.azurecr.io/stacklight/spilo:13-2.0p6-20210525081943

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


5.16.0

This section outlines release notes for the Cluster release 5.16.0 that is introduced in the Mirantis Container Cloud release 2.9.0. This Cluster release supports Mirantis Kubernetes Engine 3.3.6, Mirantis Container Runtime 19.03.14, and Kubernetes 1.18.

For the list of known and resolved issues, refer to the Container Cloud release 2.9.0 section.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 5.16.0.


StackLight components upgrade

  • Upgraded PostgreSQL from version 12 to 13

  • Updated Elasticsearch, Kibana, and Metricbeat from version 7.6.1 to 7.10.2

Multinetwork configuration for Ceph

Implemented the capability to configure multiple networks for a Ceph cluster.

TLS for Ceph public endpoints

Implemented the capability to configure TLS for a Ceph cluster using a custom ingress rule for Ceph public endpoints.

Ceph RBD mirroring

Implemented the capability to enable RADOS Block Device (RBD) mirroring for Ceph pools.
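
Once mirroring is enabled for a pool, its health can be inspected with the standard Ceph CLI. The following is a minimal sketch; it assumes the rbd CLI can reach the Ceph cluster, for example from the Rook toolbox pod, and the pool name is a placeholder.

    # Prints the RBD mirroring status of a pool using the standard Ceph CLI.
    # Assumes the rbd CLI can reach the Ceph cluster, for example from the Rook
    # toolbox pod; the pool name is a placeholder.
    import json
    import subprocess

    POOL = "mirrored-pool"  # placeholder pool name

    output = subprocess.check_output(
        ["rbd", "mirror", "pool", "status", POOL, "--format", "json"]
    )
    print(json.dumps(json.loads(output), indent=2))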

Components versions

The following table lists the components versions of the Cluster release 5.16.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Components versions of the Cluster release 5.16.0

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.3.6 0

Container runtime

Mirantis Container Runtime

19.03.14 1

Distributed storage

Ceph

14.2.19 (Nautilus)

Rook

1.5.9

LCM

descheduler

0.8.0

Helm

2.16.11-40

helm-controller Updated

0.2.0-349-g4870b7f5

lcm-ansible Updated

0.7.0-9-g30acaae

lcm-agent Updated

0.2.0-349-g4870b7f5

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

8.4.1-20210312131419

Alertmanager

0.21.0

Alertmanager Webhook ServiceNow

0.1-20210426114325

Cerebro

0.9.3

Elasticsearch Updated

7.10.2-20210513065347

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd Updated

1.10.2-20210518100631

Grafana

7.5.4

Grafana Image Renderer

2.0.1

IAM Proxy

6.0.1

Kibana Updated

7.10.2-20210513065546

Metric Collector

0.1-20210219112938

Metricbeat Updated

7.10.2

Netchecker Deprecated

1.4.1

Patroni Updated

13-2.0p6-20210525081943

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.5.1-20210323132924

Prometheus MS Teams

1.4.2

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20200428121305

Prometheus Postgres Exporter Updated

0.9.0

Prometheus Relay

0.3-20210317133316

Pushgateway

1.2.0

sf-notifier

0.3-20210323132354

sf-reporter

0.1-20201216142628

Telegraf

1.9.1-20210225142050

Telemeter

4.4.0-20200424

0

For the MKE release highlights and components versions, see MKE documentation: MKE release notes.

1
  • For the MCR release highlights, see MCR documentation: MCR release notes.

  • Due to development limitations, the MCR upgrade to version 19.03.14 on existing Container Cloud clusters is not supported.

Artifacts

This section lists the components artifacts of the Cluster release 5.16.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-271.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v14.2.19

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20210521190241

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.2.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.1

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.5.9


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.7.0-9-g30acaae/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.2.0-349-g4870b7f5/lcm-agent

Helm charts Updated

descheduler

https://binary.mirantis.com/core/helm/descheduler-1.22.4.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.22.4.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.22.4.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.22.4.tgz

Docker images

descheduler

mirantis.azurecr.io/lcm/descheduler/v0.8.0

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.2.0-349-g4870b7f5

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-20.tgz

alertmanager-webhook-servicenow

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.1.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-31.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-2.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-20.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-83.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-102.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

kibana Updated

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-25.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-8.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-8.tgz

netchecker Deprecated

https://binary.mirantis.com/core/helm/netchecker-1.4.1.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-24.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-139.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-4.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-3.tgz

prometheus-msteams

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-11.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.1.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-492.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-20.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-20.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-12.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-12.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.4.1-20210312131419

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.21.0

alertmanager-webhook-servicenow

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210426114325

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch Updated

mirantis.azurecr.io/stacklight/elasticsearch:7.10.2-20210513065347

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210518100631

gce-proxy

mirantis.azurecr.io/stacklight/gce-proxy:1.11

grafana

mirantis.azurecr.io/stacklight/grafana:7.5.4

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.2

k8s-netchecker-agent Deprecated

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-agent:2019.1

k8s-netchecker-server Deprecated

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-server:2019.1

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana Updated

mirantis.azurecr.io/stacklight/kibana:7.10.2-20210513065546

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20210219112938

metricbeat Updated

mirantis.azurecr.io/stacklight/metricbeat:7.10.2

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.5.1-20210323132924

prometheus-msteams

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20200428121305

prometheus-postgres-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.9.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210323132354

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20201216152628

spilo Updated

mirantis.azurecr.io/stacklight/spilo:13-2.0p6-20210525081943

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


5.15.0

This section outlines release notes for the Cluster release 5.15.0 that is introduced in the Mirantis Container Cloud release 2.8.0. This Cluster release supports Mirantis Kubernetes Engine 3.3.6, Mirantis Container Runtime 19.03.14, and Kubernetes 1.18.

For the list of known and resolved issues, refer to the Container Cloud release 2.8.0 section.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 5.15.0.


StackLight notifications to Microsoft Teams

Implemented the capability to enable Alertmanager to send notifications to a Microsoft Teams channel.

StackLight notifications to ServiceNow

Implemented the capability to enable Alertmanager to send notifications to ServiceNow. Also added the ServiceNowAuthFailure alert that raises in case of a failure to authenticate to ServiceNow.
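
A hedged sketch of how both notification channels might be enabled through the StackLight configuration of a cluster follows. The alertmanagerSimpleConfig parameter and its nested names are assumptions for illustration; credentials are expected to be provided through secrets rather than in plain text:

    stacklight:
      alertmanagerSimpleConfig:          # assumed parameter name
        msteams:
          enabled: true
          url: https://outlook.office.com/webhook/<channel-webhook-id>   # Microsoft Teams channel webhook
        serviceNow:
          enabled: true
          instance: https://<instance>.service-now.com
          username: stacklight-bot       # hypothetical integration user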

StackLight log collection optimization

Improved the log collection mechanism by optimizing the existing log parsers and adding new ones for multiple Container Cloud components.

Ceph default configuration options

Enhanced Ceph Controller to automatically specify default configuration options for each Ceph cluster during the Ceph deployment.

Ceph KaaSCephCluster enhancements

Implemented the following Ceph enhancements in the KaaSCephCluster CR (see the configuration sketch after this list):

  • Added the capability to specify the rgw role using the roles parameter

  • Added the following parameters:

    • rookConfig to override the Ceph configuration options

    • useAsFullName to enable the Ceph block pool to use only the name value as a name

    • targetSizeRatio to specify the expected consumption of the Ceph cluster total capacity

    • SSLCert to use a custom TLS certificate to access the Ceph RGW endpoint

    • nodeGroups to easily define specifications for multiple Ceph nodes using lists, grouped by node lists or node labels

    • clients to specify the Ceph clients and their capabilities
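
The sketch below combines several of the parameters listed above in one KaaSCephCluster specification. The parameter names come from the list; their exact placement and value formats are assumptions for illustration:

    spec:
      cephClusterSpec:
        rookConfig:                       # override Ceph configuration options
          osd_pool_default_size: "3"
        nodeGroups:                       # define specifications for multiple Ceph nodes at once
          storage-group:
            label: rook-storage           # assumed grouping by a node label
            roles: [mon, mgr, rgw]        # the rgw role can now be assigned through roles
        pools:
        - name: kubernetes
          useAsFullName: true             # use only the name value as the pool name
          replicated:
            targetSizeRatio: 10.0         # expected consumption of the total cluster capacity (value format assumed)
        clients:
        - name: glance                    # hypothetical client
          caps:
            mon: allow r
            osd: allow rwx pool=images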

Documentation enhancements

On top of continuous improvements delivered to the existing Container Cloud guides, added the following detailed procedures:

  • Recovery of failed Ceph Monitors of a Ceph cluster.

  • Silencing of StackLight alerts, for example, for maintenance or before performing an update.

Components versions

The following table lists the components versions of the Cluster release 5.15.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Components versions of the Cluster release 5.15.0

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.3.6 0

Container runtime

Mirantis Container Runtime

19.03.14 1

Distributed storage Updated

Ceph

14.2.19 (Nautilus)

Rook

1.5.9

LCM

descheduler

0.8.0

Helm

2.16.11-40

helm-controller Updated

0.2.0-327-g5676f4e3

lcm-ansible Updated

0.6.0-19-g0004de6

lcm-agent Updated

0.2.0-327-g5676f4e3

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

8.4.1-20210312131419

Alertmanager

0.21.0

Alertmanager Webhook ServiceNow New

0.1-20210426114325

Cerebro

0.9.3

Elasticsearch

7.6.1

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20210301155825

Grafana Updated

7.5.4

Grafana Image Renderer

2.0.1

IAM Proxy

6.0.1

Kibana

7.6.1

Metric Collector

0.1-20210219112938

Metricbeat

7.6.1

Netchecker

1.4.1

Patroni

12-1.6p3

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.5.1-20210323132924

Prometheus MS Teams New

1.4.2

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20200428121305

Prometheus Postgres Exporter

0.8.0-20201006113956

Prometheus Relay

0.3-20210317133316

Pushgateway

1.2.0

sf-notifier

0.3-20210323132354

sf-reporter

0.1-20201216142628

Telegraf

1.9.1-20210225142050

Telemeter

4.4.0-20200424

0

For the MKE release highlights and components versions, see MKE documentation: MKE release notes.

1
  • For the MCR release highlights, see MCR documentation: MCR release notes.

  • Due to development limitations, the MCR upgrade to version 19.03.14 is not supported on existing Container Cloud clusters.

Artifacts

This section lists the components artifacts of the Cluster release 5.15.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-242.tgz

Docker images

ceph Updated

mirantis.azurecr.io/ceph/ceph:v14.2.19

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20210425091701

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.2.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner Updated

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.1

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook Updated

mirantis.azurecr.io/ceph/rook/ceph:v1.5.9


LCM artifacts

Artifact

Component

Path

Binaries Updated

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.6.0-19-g0004de6/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.2.0-327-g5676f4e3/lcm-agent

Helm charts Updated

descheduler

https://binary.mirantis.com/core/helm/descheduler-1.20.2.tgz

managed-lcm-api

https://binary.mirantis.com/core/helm/managed-lcm-api-1.20.2.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.20.2.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.20.2.tgz

Docker images

descheduler

mirantis.azurecr.io/lcm/descheduler/v0.8.0

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.2.0-327-g5676f4e3

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta Updated

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-20.tgz

alertmanager-webhook-servicenow New

https://binary.mirantis.com/stacklight/helm/alertmanager-webhook-servicenow-0.1.0-mcp-3.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-29.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-2.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-20.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-79.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-98.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-20.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-8.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-8.tgz

netchecker

https://binary.mirantis.com/core/helm/netchecker-1.4.1.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-21.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-130.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-4.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-3.tgz

prometheus-msteams New

https://binary.mirantis.com/stacklight/helm/prometheus-msteams-0.1.0-mcp-2.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-11.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.1.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-464.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-20.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-20.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-12.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-12.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.4.1-20210312131419

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.21.0

alertmanager-webhook-servicenow New

mirantis.azurecr.io/stacklight/alertmanager-webhook-servicenow:v0.1-20210426114325

alpine-python3-requests

mirantis.azurecr.io/stacklight/alpine-python3-requests:latest-20200618

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.6.1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210301155825

gce-proxy

mirantis.azurecr.io/stacklight/gce-proxy:1.11

grafana Updated

mirantis.azurecr.io/stacklight/grafana:7.5.4

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.2

k8s-netchecker-agent

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-agent:2019.1

k8s-netchecker-server

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-server:2019.1

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana

mirantis.azurecr.io/stacklight/kibana:7.6.1

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20210219112938

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.6.1

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.5.1-20210323132924

prometheus-msteams New

mirantis.azurecr.io/stacklight/prometheus-msteams:v1.4.2

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20200428121305

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.8.0-20201006113956

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210323132354

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20201216152628

spilo

mirantis.azurecr.io/stacklight/spilo:12-1.6p3

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225152050

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


5.14.0

This section outlines release notes for the Cluster release 5.14.0 that is introduced in the Mirantis Container Cloud release 2.7.0. This Cluster release supports Mirantis Kubernetes Engine 3.3.6, Mirantis Container Runtime 19.03.14, and Kubernetes 1.18.

For the list of known and resolved issues, refer to the Container Cloud release 2.7.0 section.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 5.14.0.


Log collection optimization

Improved the log collection mechanism by optimizing the existing log parsers and adding new ones for multiple Container Cloud components.

Dedicated network for the Ceph distributed storage traffic

Technology Preview

Added the possibility to configure dedicated networks for the Ceph cluster access and replication traffic using dedicated subnets. Container Cloud automatically configures Ceph to use the addresses from the dedicated subnets after you assign the corresponding addresses to the storage nodes.
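
A minimal sketch of referencing the dedicated subnets in the KaaSCephCluster network section follows; the clusterNet and publicNet field names are assumptions for illustration:

    spec:
      cephClusterSpec:
        network:
          publicNet: 10.10.0.0/24     # access (public) traffic subnet, assumed field name
          clusterNet: 10.11.0.0/24    # replication (cluster) traffic subnet, assumed field name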

Ceph Multisite configuration

Technology Preview

Implemented the capability to enable the Ceph Multisite configuration that allows object storage to replicate its data over multiple Ceph clusters. With Multisite, such object storage is independent of and isolated from other object storage in the cluster.

Ceph troubleshooting documentation

On top of continuous improvements delivered to the existing Container Cloud guides, added the Troubleshoot Ceph section to the Operations Guide. This section now contains a detailed procedure to recover a failed or accidentally removed Ceph cluster.

Components versions

The following table lists the components versions of the Cluster release 5.14.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Components versions of the Cluster release 5.14.0

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.3.6 0

Container runtime

Mirantis Container Runtime

19.03.14 1

Distributed storage

Ceph

14.2.12 (Nautilus)

Rook

1.5.5

LCM

descheduler

0.8.0

Helm

2.16.11-40

helm-controller Updated

0.2.0-297-g8c87ad67

lcm-ansible Updated

0.5.0-10-gdd307e6

lcm-agent Updated

0.2.0-300-ga874e0df

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta Updated

8.4.1-20210312131419

Alertmanager

0.21.0

Cerebro

0.9.3

Elasticsearch

7.6.1

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20210301155825

Grafana

7.3.7

Grafana Image Renderer

2.0.1

IAM Proxy

6.0.1

Kibana

7.6.1

Metric Collector

0.1-20210219112938

Metricbeat

7.6.1

Netchecker

1.4.1

Patroni

12-1.6p3

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter Updated

0.5.1-20210323132924

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20200428121305

Prometheus Postgres Exporter

0.8.0-20201006113956

Prometheus Relay Updated

0.3-20210317133316

Pushgateway

1.2.0

sf-notifier Updated

0.3-20210323132354

sf-reporter

0.1-20201216142628

Telegraf

1.9.1-20210225142050

Telemeter

4.4.0-20200424

0

For the MKE release highlights and components versions, see MKE documentation: MKE release notes.

1
  • For the MCR release highlights, see MCR documentation: MCR release notes.

  • Due to development limitations, the MCR upgrade to version 19.03.14 is not supported on existing Container Cloud clusters.

Artifacts

This section lists the components artifacts of the Cluster release 5.14.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-177.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v14.2.12

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20210322210534

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.2.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.0

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.5.5


LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible Updated

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.5.0-10-gdd307e6/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.2.0-300-ga874e0df/lcm-agent

Helm charts

descheduler Updated

https://binary.mirantis.com/core/helm/descheduler-1.19.1.tgz

managed-lcm-api Updated

https://binary.mirantis.com/core/helm/managed-lcm-api-1.19.1.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.19.1.tgz

metrics-server Updated

https://binary.mirantis.com/core/helm/metrics-server-1.19.1.tgz

Docker images

descheduler

mirantis.azurecr.io/lcm/descheduler/v0.8.0

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.2.0-297-g8c87ad67

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-15.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-22.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-2.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-17.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-61.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-93.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-20.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-8.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-8.tgz

netchecker

https://binary.mirantis.com/core/helm/netchecker-1.4.1.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-20.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-124.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-4.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-3.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-11.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.1.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-438.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-20.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-20.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-12.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-12.tgz

Docker images

alerta Updated

mirantis.azurecr.io/stacklight/alerta-web:8.4.1-20210312131419

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.21.0

alpine-python3-requests

mirantis.azurecr.io/stacklight/alpine-python3-requests:latest-20200618

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.6.1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210301155825

gce-proxy

mirantis.azurecr.io/stacklight/gce-proxy:1.11

grafana

mirantis.azurecr.io/stacklight/grafana:7.3.7

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.2

k8s-netchecker-agent

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-agent:2019.1

k8s-netchecker-server

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-server:2019.1

k8s-sidecar Updated

mirantis.azurecr.io/stacklight/k8s-sidecar:1.10.8

kibana

mirantis.azurecr.io/stacklight/kibana:7.6.1

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20210219112938

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.6.1

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.5.1-20210323132924

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20200428121305

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.8.0-20201006113956

prometheus-relay Updated

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20210317133316

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20210323132354

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20201216142628

spilo

mirantis.azurecr.io/stacklight/spilo:12-1.6p3

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225142050

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0


5.13.0

This section outlines release notes for the Cluster release 5.13.0 that is introduced in the Mirantis Container Cloud release 2.6.0. This Cluster release supports Mirantis Kubernetes Engine 3.3.6, Mirantis Container Runtime 19.03.14, and Kubernetes 1.18.

For the list of known and resolved issues, refer to the Container Cloud release 2.6.0 section.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 5.13.0.


StackLight logging levels

Significantly enhanced the StackLight log collection mechanism to avoid collecting and keeping an excessive amount of log messages when it is not essential. Now, during or after deployment of StackLight, you can select one of the 9 available logging levels depending on the required severity. The default logging level is INFO.
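
A minimal sketch of selecting the logging level in the StackLight configuration follows; the parameter name and nesting are assumptions for illustration:

    stacklight:
      logging:
        level: INFO    # one of the nine available levels, INFO is the default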

Remote logging to syslog

Implemented the capability to configure StackLight to forward all logs to an external syslog server. In this case, StackLight will send logs both to the syslog server and to Elasticsearch, which is the default target.
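
A hedged sketch of enabling log forwarding to an external syslog server follows; the parameter names are assumptions for illustration:

    stacklight:
      logging:
        syslog:
          enabled: true              # logs are still sent to Elasticsearch as well
          host: syslog.example.com
          port: 514
          protocol: udp              # assumed; tcp may also be supported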

Hyperconverged Ceph

Technology Preview

Implemented the capability to configure Ceph Controller to start pods on the tainted nodes and manage the resources of Ceph nodes. Now, when bootstrapping a new management or managed cluster, you can specify requests, limits, or tolerations for Ceph resources. You can also configure resource management for an existing Ceph cluster. However, such an approach may cause downtime.
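
A sketch of specifying tolerations as well as resource requests and limits for Ceph daemons follows. The hyperconverge section name and its structure are assumptions for illustration:

    spec:
      cephClusterSpec:
        hyperconverge:                 # assumed section name
          tolerations:
            osd:
              rules:
              - key: node-role/storage
                operator: Exists
                effect: NoSchedule
          resources:
            osd:
              requests:
                cpu: "2"
                memory: 4Gi
              limits:
                cpu: "4"
                memory: 8Gi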

Ceph objectStorage section in KaaSCephCluster

Improved user experience by moving the rgw section of the KaaSCephCluster CR to a common objectStorage section that now includes all RADOS Gateway configurations of a Ceph cluster. The spec.rgw section is deprecated. However, if you continue using spec.rgw, it will be automatically translated into the new objectStorage.rgw section during the Container Cloud update to 2.6.0.
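
The sketch below contrasts the deprecated layout with the new one. The spec.rgw and objectStorage.rgw section names come from the description above; the nested fields shown are assumptions for illustration:

    # Deprecated layout, automatically translated during the update to 2.6.0:
    spec:
      cephClusterSpec:
        rgw:
          name: openstack-store
    # New layout:
    spec:
      cephClusterSpec:
        objectStorage:
          rgw:
            name: openstack-store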

Ceph maintenance orchestration

Implemented the capability to enable Ceph maintenance mode using the maintenance flag not only during a managed cluster update but also whenever required. However, Mirantis does not recommend enabling maintenance on production deployments other than during an update.
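
A minimal sketch of enabling the maintenance mode follows; the exact placement of the flag in the KaaSCephCluster specification is an assumption for illustration:

    spec:
      cephClusterSpec:
        maintenance: true    # set back to false once the maintenance works are finished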

Components versions

The following table lists the components versions of the Cluster release 5.13.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Components versions of the Cluster release 5.13.0

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.3.6 0

Container runtime

Mirantis Container Runtime

19.03.14 1

Distributed storage

Ceph

14.2.12 (Nautilus)

Rook

1.5.5

LCM

descheduler

0.8.0

Helm

2.16.11-40

helm-controller Updated

0.2.0-289-gd7e9fa9c

lcm-ansible Updated

0.4.0-4-ga2bb104

lcm-agent Updated

0.2.0-289-gd7e9fa9c

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

8.0.2-20201014133832

Alertmanager

0.21.0

Cerebro

0.9.3

Elasticsearch

7.6.1

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd Updated

1.10.2-20210301155825

Grafana Updated

7.3.7

Grafana Image Renderer Updated

2.0.1

IAM Proxy

6.0.1

Kibana

7.6.1

Metric Collector Updated

0.1-20210219112938

Metricbeat

7.6.1

Netchecker

1.4.1

Patroni

12-1.6p3

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.5.1-20201002144823

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20200428121305

Prometheus Postgres Exporter

0.8.0-20201006113956

Prometheus Relay

0.3-20200519054052

Pushgateway

1.2.0

sf-notifier

0.3-20201216142028

sf-reporter

0.1-20201216142628

Telegraf Updated

1.9.1-20210225142050

Telemeter

4.4.0-20200424

0

For the MKE release highlights and components versions, see MKE documentation: MKE release notes.

1
  • For the MCR release highlights, see MCR documentation: MCR release notes.

  • Due to development limitations, the MCR upgrade to version 19.03.14 is not supported on existing Container Cloud clusters.

Artifacts

This section lists the components artifacts of the Cluster release 5.13.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-165.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v14.2.12

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20210309160354

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.2.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.0

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.5.5


LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible Updated

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.4.0-4-ga2bb104/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.2.0-289-gd7e9fa9c/lcm-agent

Helm charts

descheduler Updated

https://binary.mirantis.com/core/helm/descheduler-1.18.1.tgz

managed-lcm-api Updated

https://binary.mirantis.com/core/helm/managed-lcm-api-1.18.1.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.18.1.tgz

metrics-server Updated

https://binary.mirantis.com/core/helm/metrics-server-1.18.1.tgz

Docker images

descheduler

mirantis.azurecr.io/lcm/descheduler/v0.8.0

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.2.0-289-gd7e9fa9c

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta Updated

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-15.tgz

cerebro

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-22.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-2.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-16.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-44.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-93.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-20.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-8.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-8.tgz

netchecker

https://binary.mirantis.com/core/helm/netchecker-1.4.1.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-20.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-121.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-4.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-3.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-11.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.1.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-426.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-20.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-20.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-12.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-12.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.0.2-20201014133832

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.21.0

alpine-python3-requests

mirantis.azurecr.io/stacklight/alpine-python3-requests:latest-20200618

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.6.1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd Updated

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20210301155825

gce-proxy

mirantis.azurecr.io/stacklight/gce-proxy:1.11

grafana Updated

mirantis.azurecr.io/stacklight/grafana:7.3.7

grafana-image-renderer Updated

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.1

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.2

k8s-netchecker-agent

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-agent:2019.1

k8s-netchecker-server

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-server:2019.1

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:0.1.178

kibana

mirantis.azurecr.io/stacklight/kibana:7.6.1

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20210219112938

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.6.1

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.5.1-20201002144823

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20200428121305

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.8.0-20201006113956

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20200519054052

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20201216142028

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20201216142628

spilo

mirantis.azurecr.io/stacklight/spilo:12-1.6p3

telegraf Updated

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20210225142050

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0

5.12.0

This section outlines release notes for the Cluster release 5.12.0 that is introduced in the Mirantis Container Cloud release 2.5.0. This Cluster release supports Kubernetes 1.18 and Mirantis Container Runtime 19.03.14 as well as introduces support for the updated version of Mirantis Kubernetes Engine 3.3.6.

For the list of known and resolved issues, refer to the Container Cloud release 2.5.0 section.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 5.12.0.


Ceph maintenance label

Implemented the maintenance label that is set for Ceph during a managed cluster update. This prevents a Ceph rebalance that could lead to data loss during the update.

RGW check box in Container Cloud web UI

Implemented the Enable Object Storage checkbox in the Container Cloud web UI to allow enabling a single-instance RGW Object Storage when creating a Ceph cluster as described in Add a Ceph cluster.

Ceph RGW HA

Enhanced Ceph to support RADOS Gateway (RGW) high availability. Now, you can run multiple instances of Ceph RGW in active/active mode.

Cerebro support for StackLight

Enhanced StackLight by adding support for Cerebro, a web UI that visualizes the health of Elasticsearch clusters and allows for convenient debugging. Cerebro is disabled by default.
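
A minimal sketch of enabling Cerebro in the StackLight configuration follows; the parameter name and nesting are assumptions for illustration:

    stacklight:
      cerebro:
        enabled: true    # Cerebro is disabled by default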

StackLight proxy

Added proxy support for Alertmanager, Metric collector, Salesforce notifier and reporter, and Telemeter client. Now, these StackLight components automatically use the same proxy that is configured for Container Cloud clusters.

Note

The proxy handles only HTTP and HTTPS traffic. Therefore, for clusters with limited or no Internet access, it is not possible to set up Alertmanager email notifications, which use SMTP, when a proxy is used.

Note

Due to a limitation, StackLight fails to integrate with an external proxy when authentication is handled by the proxy server itself. In such cases, the proxy server ignores the HTTP Authorization header for basic authentication passed by Prometheus Alertmanager. Therefore, use proxies without authentication or with authentication handled by a reverse proxy.

Components versions

The following table lists the components versions of the Cluster release 5.12.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Components versions of the Cluster release 5.12.0

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.3.6 0

Container runtime

Mirantis Container Runtime

19.03.14 1

Distributed storage

Ceph

14.2.12 (Nautilus)

Rook

1.5.5

LCM

descheduler

0.8.0

Helm

2.16.11-40

helm-controller Updated

0.2.0-258-ga2d72294

lcm-ansible Updated

0.3.0-10-g7c2a87e

lcm-agent Updated

0.2.0-258-ga2d72294

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

8.0.2-20201014133832

Alertmanager

0.21.0

Cerebro New

0.9.3

Elasticsearch

7.6.1

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20200609085335

Grafana

7.1.5

Grafana Image Renderer

2.0.0

IAM Proxy

6.0.1

Kibana

7.6.1

Metric Collector

0.1-20201222100033

Metricbeat

7.6.1

Netchecker

1.4.1

Patroni

12-1.6p3

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.5.1-20201002144823

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20200428121305

Prometheus Postgres Exporter

0.8.0-20201006113956

Prometheus Relay

0.3-20200519054052

Pushgateway

1.2.0

sf-notifier

0.3-20201216142028

sf-reporter

0.1-20201216142628

Telegraf

1.9.1-20201222194740

Telemeter

4.4.0-20200424

0

For the MKE release highlights and components versions, see MKE documentation: MKE release notes.

1
  • For the MCR release highlights, see MCR documentation: MCR release notes.

  • Due to development limitations, the MCR upgrade to version 19.03.14 is not supported on existing Container Cloud clusters.

Artifacts

This section lists the components artifacts of the Cluster release 5.12.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-127.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v14.2.12

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20210201202754

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.2.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.0

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.5.5


LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible Updated

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.3.0-10-g7c2a87e/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.2.0-258-ga2d72294/lcm-agent

Helm charts

descheduler Updated

https://binary.mirantis.com/core/helm/descheduler-1.17.4.tgz

managed-lcm-api Updated

https://binary.mirantis.com/core/helm/managed-lcm-api-1.17.4.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.17.4.tgz

metrics-server Updated

https://binary.mirantis.com/core/helm/metrics-server-1.17.4.tgz

Docker images

descheduler

mirantis.azurecr.io/lcm/descheduler/v0.8.0

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.2.0-258-ga2d72294

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-13.tgz

cerebro New

https://binary.mirantis.com/stacklight/helm/cerebro-0.1.0-mcp-2.tgz

elasticsearch

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-22.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-2.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-15.tgz

fluentd-elasticsearch

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-33.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-89.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-20.tgz

metric-collector

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-8.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-8.tgz

netchecker

https://binary.mirantis.com/core/helm/netchecker-1.4.1.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-19.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-119.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-4.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-3.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-11.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.1.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-413.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-20.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-20.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-12.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-12.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.0.2-20201014133832

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.21.0

alpine-python3-requests

mirantis.azurecr.io/stacklight/alpine-python3-requests:latest-20200618

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

cerebro New

mirantis.azurecr.io/stacklight/cerebro:0.9.3

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.6.1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20200609085335

gce-proxy

mirantis.azurecr.io/stacklight/gce-proxy:1.11

grafana

mirantis.azurecr.io/stacklight/grafana:7.1.5

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.0

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.2

k8s-netchecker-agent

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-agent:2019.1

k8s-netchecker-server

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-server:2019.1

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:0.1.178

kibana

mirantis.azurecr.io/stacklight/kibana:7.6.1

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20201222100033

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.6.1

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.5.1-20201002144823

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20200428121305

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.8.0-20201006113956

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20200519054052

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20201216142028

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20201216142628

spilo

mirantis.azurecr.io/stacklight/spilo:12-1.6p3

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20201222194740

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq

mirantis.azurecr.io/stacklight/yq:v4.2.0

5.11.0

This section outlines release notes for the Cluster release 5.11.0 that is introduced in the Mirantis Container Cloud release 2.4.0. This Cluster release supports Kubernetes 1.18 and Mirantis Kubernetes Engine 3.3.4 as well as introduces support for the updated version of Mirantis Container Runtime 19.03.14.

Note

The Cluster release 5.11.0 supports only attachment of existing MKE 3.3.4 clusters.

For the deployment of new or attachment of existing clusters based on other supported MKE versions, the latest available Cluster releases are used.

For the list of known and resolved issues, refer to the Container Cloud release 2.4.0 section.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 5.11.0.


Alert inhibition rules

Implemented alert inhibition rules to provide a clearer view of the cloud status and simplify troubleshooting. Using alert inhibition rules, Alertmanager decreases alert noise by suppressing dependent alert notifications. The feature is enabled by default. For details, see Alert dependencies.

Integration between Grafana and Kibana

Implemented integration between Grafana and Kibana by adding a View logs in Kibana link to the majority of Grafana dashboards, which allows you to immediately view contextually relevant logs through the Kibana web UI.

Telegraf alert

Implemented the TelegrafGatherErrors alert that raises if Telegraf fails to gather metrics.

Learn more

Telegraf alerts

Configuration of Ironic Telegraf input plugin

Added the ironic.insecure parameter for enabling or disabling the host and chain verification for bare metal Ironic monitoring.
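
A minimal sketch of the parameter in the StackLight configuration follows; its nesting under the stacklight key is an assumption for illustration:

    stacklight:
      ironic:
        insecure: true    # disable host and chain verification for bare metal Ironic monitoring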

Automatically defined cluster ID

Enhanced StackLight to automatically set the clusterId parameter that defines the ID of a Container Cloud cluster. Now, you do not need to set or modify this parameter manually when configuring the sf-notifier and sf-reporter services.

Components versions

The following table lists the components versions of the Cluster release 5.11.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Components versions of the Cluster release 5.11.0

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.3.4 0

Container runtime

Mirantis Container Runtime Updated

19.03.14 1

Distributed storage Updated

Ceph

14.2.12 (Nautilus)

Rook

1.5.5

LCM

descheduler

0.8.0

Helm

2.16.11-40

helm-controller

0.2.0-221-g32bd5f56

lcm-ansible Updated

0.2.0-394-g599b2a1

lcm-agent

0.2.0-221-g32bd5f56

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

8.0.2-20201014133832

Alertmanager

0.21.0

Elasticsearch

7.6.1

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20200609085335

Grafana

7.1.5

Grafana Image Renderer

2.0.0

IAM Proxy

6.0.1

Kibana

7.6.1

Metric Collector Updated

0.1-20201222100033

Metricbeat

7.6.1

Netchecker

1.4.1

Patroni

12-1.6p3

Prometheus

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.5.1-20201002144823

Prometheus libvirt Exporter

0.1-20200610164751

Prometheus Memcached Exporter

0.5.0

Prometheus MySQL Exporter

0.11.0

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20200428121305

Prometheus Postgres Exporter

0.8.0-20201006113956

Prometheus RabbitMQ Exporter

1.0.0-RC7.1

Prometheus Relay

0.3-20200519054052

Pushgateway

1.2.0

sf-notifier Updated

0.3-20201216142028

sf-reporter Updated

0.1-20201216142628

Telegraf Updated

1.9.1-20201222194740

Telemeter

4.4.0-20200424

0

For the MKE release highlights and components versions, see MKE documentation: MKE release notes.

1
  • For the MCR release highlights, see MCR documentation: MCR release notes.

  • Due to development limitations, the MCR upgrade to version 19.03.14 is not supported on existing Container Cloud clusters.

Artifacts

This section lists the components artifacts of the Cluster release 5.11.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version, are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-111.tgz

Docker images Updated

ceph

mirantis.azurecr.io/ceph/ceph:v14.2.12

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20210120004212

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.2.1

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v2.1.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v2.1.0

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v4.0.0

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v3.1.0

csi-resizer New

mirantis.azurecr.io/ceph/k8scsi/csi-resizer:v1.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.5.5


LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible Updated

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.2.0-394-g599b2a1/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.2.0-221-g32bd5f56/lcm-agent

Helm charts

descheduler Updated

https://binary.mirantis.com/core/helm/descheduler-1.16.0.tgz

managed-lcm-api Updated

https://binary.mirantis.com/core/helm/managed-lcm-api-1.16.0.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.16.0.tgz

metrics-server Updated

https://binary.mirantis.com/core/helm/metrics-server-1.16.0.tgz

Docker images

descheduler

mirantis.azurecr.io/lcm/descheduler/v0.8.0

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller

mirantis.azurecr.io/lcm/lcm-controller:v0.2.0-221-g32bd5f56

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-13.tgz

elasticsearch

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-22.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-2.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-15.tgz

fluentd-elasticsearch

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-33.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-81.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-20.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-8.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-8.tgz

netchecker

https://binary.mirantis.com/core/helm/netchecker-1.4.1.tgz

patroni Updated

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-19.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-114.tgz

prometheus-blackbox-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-4.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-3.tgz

prometheus-libvirt-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-libvirt-exporter-0.1.0-mcp-2.tgz

prometheus-memcached-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-memcached-exporter-0.1.0-mcp-1.tgz

prometheus-mysql-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-mysql-exporter-0.3.2-mcp-1.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

prometheus-rabbitmq-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-rabbitmq-exporter-0.4.1-mcp-1.tgz

sf-notifier Updated

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-11.tgz

sf-reporter Updated

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.1.0-mcp-10.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-398.tgz

telegraf-ds Updated

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-20.tgz

telegraf-s Updated

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-20.tgz

telemeter-server Updated

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-12.tgz

telemeter-client Updated

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-12.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.0.2-20201014133832

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.21.0

alpine-python3-requests

mirantis.azurecr.io/stacklight/alpine-python3-requests:latest-20200618

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.6.1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20200609085335

gce-proxy

mirantis.azurecr.io/stacklight/gce-proxy:1.11

grafana

mirantis.azurecr.io/stacklight/grafana:7.1.5

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.0

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.2

k8s-netchecker-agent

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-agent:2019.1

k8s-netchecker-server

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-server:2019.1

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:0.1.178

kibana

mirantis.azurecr.io/stacklight/kibana:7.6.1

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20201222100033

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.6.1

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.5.1-20201002144823

prometheus-libvirt-exporter

mirantis.azurecr.io/stacklight/libvirt-exporter:v0.1-20200610164751

prometheus-memcached-exporter

mirantis.azurecr.io/stacklight/memcached-exporter:v0.5.0

prometheus-mysql-exporter

mirantis.azurecr.io/stacklight/mysqld-exporter:v0.11.0

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20200428121305

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.8.0-20201006113956

prometheus-rabbitmq-exporter

mirantis.azurecr.io/stacklight/rabbitmq-exporter:v1.0.0-RC7.1

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20200519054052

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20201216142028

sf-reporter Updated

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20201216142628

spilo

mirantis.azurecr.io/stacklight/spilo:12-1.6p3

telegraf Updated

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20201222194740

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

yq New

mirantis.azurecr.io/stacklight/yq:v4.2.0


5.10.0

This section outlines release notes for the Cluster release 5.10.0 that is introduced in the Mirantis Container Cloud release 2.3.0. This Cluster release supports Kubernetes 1.18 and introduces support for the latest versions of Mirantis Kubernetes Engine 3.3.4 and Mirantis Container Runtime 19.03.13.

For the list of known and resolved issues, refer to the Container Cloud release 2.3.0 section.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 5.10.0.


Ceph Object Storage support

Enhanced Ceph to support RADOS Gateway (RGW) Object Storage.
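For illustration only, RGW Object Storage is enabled through the Ceph cluster specification. The fragment below is a sketch under assumed field names; refer to the Operations Guide for the exact KaaSCephCluster schema supported by this release.

```yaml
# Sketch of an RGW Object Storage definition in the Ceph cluster spec.
# Field names are assumptions for illustration; consult the Operations
# Guide for the exact KaaSCephCluster schema.
spec:
  cephClusterSpec:
    objectStorage:
      rgw:
        name: object-store
        dataPool:
          failureDomain: host
          erasureCoded:
            dataChunks: 2
            codingChunks: 1
        metadataPool:
          failureDomain: host
          replicated:
            size: 3
        gateway:
          instances: 2
          port: 80
          securePort: 8443
        preservePoolsOnDelete: false
```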

Ceph state verification

Implemented the capability to obtain detailed information on the Ceph cluster state, including Ceph logs, the state of Ceph OSDs, and a list of Ceph pools.

Components versions

The following table lists the components versions of the Cluster release 5.10.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine Updated

3.3.4 0

Container runtime

Mirantis Container Runtime Updated

19.03.13 1

Distributed storage

Ceph

14.2.11 (Nautilus)

Rook

1.4.4

LCM

descheduler

0.8.0

Helm Updated

2.16.11-40

helm-controller Updated

0.2.0-221-g32bd5f56

lcm-ansible Updated

0.2.0-381-g720ec96

lcm-agent Updated

0.2.0-221-g32bd5f56

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

8.0.2-20201014133832

Alertmanager

0.21.0

Elasticsearch

7.6.1

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20200609085335

Grafana

7.1.5

Grafana Image Renderer

2.0.0

IAM Proxy

6.0.1

Kibana

7.6.1

Metric Collector Updated

0.1-20201120155524

Metricbeat

7.6.1

Netchecker

1.4.1

Patroni

12-1.6p3

Prometheus Updated

2.22.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.5.1-20201002144823

Prometheus libvirt Exporter

0.1-20200610164751

Prometheus Memcached Exporter

0.5.0

Prometheus MySQL Exporter

0.11.0

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20200428121305

Prometheus Postgres Exporter

0.8.0-20201006113956

Prometheus RabbitMQ Exporter Updated

1.0.0-RC7.1

Prometheus Relay

0.3-20200519054052

Pushgateway

1.2.0

sf-notifier

0.3-20201001081256

sf-reporter

0.1-20200219140217

Telegraf Updated

1.9.1-20201120081248

Telemeter

4.4.0-20200424

0

For the MKE release highlights and components versions, see MKE documentation: MKE release notes.

1
  • For the MCR release highlights, see MCR documentation: MCR release notes.

  • Due to development limitations, the MCR upgrade to version 19.03.14 on existing Container Cloud clusters is not supported.

Artifacts

This section lists the components artifacts of the Cluster release 5.10.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-95.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v14.2.11

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20201215142221

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.1.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v1.2.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v1.6.0

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v2.1.1

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v2.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.4.4


LCM artifacts

Artifact

Component

Path

Binaries

lcm-ansible Updated

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.2.0-381-g720ec96/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.2.0-221-g32bd5f56/lcm-agent

Helm charts

descheduler Updated

https://binary.mirantis.com/core/helm/descheduler-1.15.1.tgz

managed-lcm-api New

https://binary.mirantis.com/core/helm/managed-lcm-api-1.15.1.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.15.1.tgz

metrics-server Updated

https://binary.mirantis.com/core/helm/metrics-server-1.15.1.tgz

Docker images

descheduler

mirantis.azurecr.io/lcm/descheduler/v0.8.0

helm Updated

mirantis.azurecr.io/lcm/helm/tiller:v2.16.11-40

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.2.0-221-g32bd5f56

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-13.tgz

elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-22.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-2.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-15.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-33.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-74.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-20.tgz

metric-collector Updated

https://binary.mirantis.com/stacklight/helm/metric-collector-0.2.0-mcp-5.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-8.tgz

netchecker

https://binary.mirantis.com/core/helm/netchecker-1.4.1.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-17.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-102.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-3.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-3.tgz

prometheus-libvirt-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-libvirt-exporter-0.1.0-mcp-2.tgz

prometheus-memcached-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-memcached-exporter-0.1.0-mcp-1.tgz

prometheus-mysql-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-mysql-exporter-0.3.2-mcp-1.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

prometheus-rabbitmq-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-rabbitmq-exporter-0.4.1-mcp-1.tgz

sf-notifier Updated

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-9.tgz

sf-reporter Updated

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.1.0-mcp-8.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-354.tgz

telegraf-ds Updated

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-19.tgz

telegraf-s Updated

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-19.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-11.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-11.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:8.0.2-20201014133832

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.21.0

alpine-python3-requests

mirantis.azurecr.io/stacklight/alpine-python3-requests:latest-20200618

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.6.1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20200609085335

gce-proxy

mirantis.azurecr.io/stacklight/gce-proxy:1.11

grafana

mirantis.azurecr.io/stacklight/grafana:7.1.5

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.0

kubectl

mirantis.azurecr.io/stacklight/kubectl:1.19.2

k8s-netchecker-agent

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-agent:2019.1

k8s-netchecker-server

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-server:2019.1

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:0.1.178

kibana

mirantis.azurecr.io/stacklight/kibana:7.6.1

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

metric-collector Updated

mirantis.azurecr.io/stacklight/metric-collector:v0.1-20201120155524

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.6.1

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus Updated

mirantis.azurecr.io/stacklight/prometheus:v2.22.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.5.1-20201002144823

prometheus-libvirt-exporter

mirantis.azurecr.io/stacklight/libvirt-exporter:v0.1-20200610164751

prometheus-memcached-exporter

mirantis.azurecr.io/stacklight/memcached-exporter:v0.5.0

prometheus-mysql-exporter

mirantis.azurecr.io/stacklight/mysqld-exporter:v0.11.0

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20200428121305

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.8.0-20201006113956

prometheus-rabbitmq-exporter Updated

mirantis.azurecr.io/stacklight/rabbitmq-exporter:v1.0.0-RC7.1

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20200519054052

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20201001081256

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20200219140217

spilo

mirantis.azurecr.io/stacklight/spilo:12-1.6p3

telegraf Updated

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20201120081248

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

5.9.0

This section outlines release notes for the Cluster release 5.9.0 that is introduced in the Mirantis Container Cloud release 2.2.0 and supports Mirantis Kubernetes Engine 3.3.3, Mirantis Container Runtime 19.03.12, and Kubernetes 1.18.

For the list of known and resolved issues, refer to the Container Cloud release 2.2.0 section.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 5.9.0.


Alerta upgrade

Upgraded Alerta from version 7.4.4 to 8.0.2.

File descriptors monitoring

Enhanced StackLight to monitor the number of file descriptors on nodes and raise FileDescriptorUsage* alerts when a node uses 80%, 90%, or 95% of file descriptors.
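For illustration, such a check can be expressed with standard node-exporter metrics. The rule below is a sketch at the 90% threshold and does not reproduce the exact FileDescriptorUsage* expressions shipped with StackLight.

```yaml
# Sketch of a file descriptor usage alert based on node-exporter metrics.
# The actual FileDescriptorUsage* rules in StackLight may differ.
groups:
  - name: node-file-descriptors-example
    rules:
      - alert: FileDescriptorUsageWarningExample
        expr: node_filefd_allocated / node_filefd_maximum * 100 > 90
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "{{ $labels.instance }} uses more than 90% of available file descriptors"
```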

Learn more

General node alerts

Alerts improvements
  • Added the SSLProbesFailing alert that raises in case of an SSL certificate probe failure.

  • Improved alert descriptions and raise conditions.

Components versions

The following table lists the components versions of the Cluster release 5.9.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Components versions of the Cluster release 5.9.0

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.3.3 0

Container runtime

Mirantis Container Runtime

19.03.12 1

Distributed storage

Ceph Updated

14.2.11 (Nautilus)

Rook Updated

1.4.4

LCM

ansible-docker Updated

0.3.5-147-g18f3b44

descheduler

0.8.0

Helm

2.16.9-39

helm-controller Updated

0.2.0-178-g8cc488f8

lcm-ansible Updated

0.2.0-132-g49f7591

lcm-agent Updated

0.2.0-178-g8cc488f8

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta Updated

8.0.2-20201014133832

Alertmanager

0.21.0

Elasticsearch

7.6.1

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20200609085335

Grafana

7.1.5

Grafana Image Renderer

2.0.0

IAM Proxy

6.0.1

Kibana

7.6.1

MCC Metric Collector

0.1-20201005141816

Metricbeat

7.6.1

Netchecker

1.4.1

Patroni

12-1.6p3

Prometheus Updated

2.19.3

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.5.1-20201002144823

Prometheus libvirt Exporter

0.1-20200610164751

Prometheus Memcached Exporter

0.5.0

Prometheus MySQL Exporter

0.11.0

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20200428121305

Prometheus Postgres Exporter

0.8.0-20201006113956

Prometheus RabbitMQ Exporter

0.29.0

Prometheus Relay

0.3-20200519054052

Pushgateway

1.2.0

sf-notifier Updated

0.3-20201001081256

sf-reporter

0.1-20200219140217

telegraf-ds

1.9.1-20200901112858

telegraf-s

1.9.1-20200901112858

Telemeter

4.4.0-20200424

0

For the MKE release highlights and components versions, see MKE documentation: MKE release notes.

1

For the MCR release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 5.9.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-33.tgz

Docker images

ceph Updated

mirantis.azurecr.io/ceph/ceph:v14.2.11

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20201022081323

cephcsi Updated

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v3.1.0

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v1.2.0

csi-provisioner Updated

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v1.6.0

csi-snapshotter Updated

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v2.1.1

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v2.1.0

rook Updated

mirantis.azurecr.io/ceph/rook/ceph:v1.4.4


LCM artifacts

Artifact

Component

Path

Binaries

ansible-docker Updated

https://binary.mirantis.com/lcm/bin/ansible-docker/v0.3.5-147-g18f3b44/ansible-docker.tar.gz

lcm-ansible Updated

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.2.0-132-g49f7591-1/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.2.0-178-g8cc488f8/lcm-agent

Helm charts

descheduler Updated

https://binary.mirantis.com/core/helm/descheduler-1.14.0.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.14.0.tgz

metrics-server Updated

https://binary.mirantis.com/core/helm/metrics-server-1.14.0.tgz

Docker images

descheduler

mirantis.azurecr.io/lcm/descheduler/v0.8.0

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.9-39

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.2.0-178-g8cc488f8

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta Updated

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-13.tgz

elasticsearch

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-20.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-2.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-15.tgz

fluentd-elasticsearch Updated

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-28.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-66.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-20.tgz

mcc-metric-collector

https://binary.mirantis.com/stacklight/helm/mcc-metric-collector-0.1.0-mcp-22.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-8.tgz

netchecker

https://binary.mirantis.com/core/helm/netchecker-1.4.1.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-17.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-83.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-3.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-3.tgz

prometheus-libvirt-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-libvirt-exporter-0.1.0-mcp-2.tgz

prometheus-memcached-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-memcached-exporter-0.1.0-mcp-1.tgz

prometheus-mysql-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-mysql-exporter-0.3.2-mcp-1.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

prometheus-rabbitmq-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-rabbitmq-exporter-0.4.1-mcp-1.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-5.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.1.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-325.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-16.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-16.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-11.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-11.tgz

Docker images

alerta Updated

mirantis.azurecr.io/stacklight/alerta-web:8.0.2-20201014133832

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.21.0

alpine-python3-requests Updated

mirantis.azurecr.io/stacklight/alpine-python3-requests:latest-20200618

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.6.1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20200609085335

gce-proxy

mirantis.azurecr.io/stacklight/gce-proxy:1.11

grafana

mirantis.azurecr.io/stacklight/grafana:7.1.5

grafana-image-renderer

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.0

kubectl Updated

mirantis.azurecr.io/stacklight/kubectl:1.19.2

k8s-netchecker-agent

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-agent:2019.1

k8s-netchecker-server

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-server:2019.1

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:0.1.178

kibana

mirantis.azurecr.io/stacklight/kibana:7.6.1

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

mcc-metric-collector

mirantis.azurecr.io/stacklight/mcc-metric-collector:v0.1-20201005141816

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.6.1

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus Updated

mirantis.azurecr.io/stacklight/prometheus:v2.19.3

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.5.1-20201002144823

prometheus-libvirt-exporter

mirantis.azurecr.io/stacklight/libvirt-exporter:v0.1-20200610164751

prometheus-memcached-exporter

mirantis.azurecr.io/stacklight/memcached-exporter:v0.5.0

prometheus-mysql-exporter

mirantis.azurecr.io/stacklight/mysqld-exporter:v0.11.0

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20200428121305

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.8.0-20201006113956

prometheus-rabbitmq-exporter

mirantis.azurecr.io/stacklight/rabbitmq-exporter:v0.29.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20200519054052

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20201001081256

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20200219140217

spilo

mirantis.azurecr.io/stacklight/spilo:12-1.6p3

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20200901112858

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

5.8.0

This section outlines release notes for the Cluster release 5.8.0 that is introduced in the Mirantis Container Cloud release 2.1.0 and supports Mirantis Kubernetes Engine 3.3.3, Mirantis Container Runtime 19.03.12, and Kubernetes 1.18.

For the list of known issues, refer to the Container Cloud release 2.1.0 Known issues.

Enhancements

This section outlines new features and enhancements introduced in the Cluster release 5.8.0.


Grafana improvements
  • Upgraded Grafana from version 6.6.2 to 7.1.5.

  • Introduced Grafana Image Renderer, a separate container in the Grafana pod that offloads rendering of images from charts. Grafana Image Renderer is enabled by default.

  • Configured a home dashboard to replace the Installation/configuration panel that opens when you access Grafana. By default, Kubernetes Cluster is set as the home dashboard. However, you can set any of the available Grafana dashboards as the home dashboard, as illustrated in the sketch after this list.
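A minimal sketch of how the home dashboard could be overridden through the StackLight values, assuming a grafana.homeDashboard parameter; the exact key name is an assumption, so verify it against the StackLight configuration parameters.

```yaml
# Illustrative StackLight values fragment. The key name is an assumption;
# verify it against the StackLight configuration parameters.
grafana:
  homeDashboard: kubernetes-cluster   # dashboard to open when you access Grafana
```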

Clusters telemetry improvement in StackLight
  • Split the regional and management cluster functions in StackLight telemetry. Now, the metrics from managed clusters are aggregated on regional clusters, and then both regional and managed cluster metrics are sent from regional clusters to the management cluster.

  • Added the capability to filter panels by regions in the Clusters Overview and Telemeter Server Grafana dashboards.

Alerts improvements
  • Improved alert descriptions and raise conditions.

  • Changed severity in some alerts to improve operability.

  • Improved raise conditions of some alerts by adding the for clause and unifying the existing for clauses.

Components versions

The following table lists the components versions of the Cluster release 5.8.0.

Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Components versions of the Cluster release 5.8.0

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.3.3 0

Container runtime

Mirantis Container Runtime

19.03.12 1

Distributed storage

Ceph

14.2.9 (Nautilus)

Rook

1.3.8

LCM

ansible-docker Updated

0.3.5-141-g1007cc9

descheduler

0.8.0

Helm Updated

2.16.9-39

helm-controller Updated

0.2.0-169-g5668304d

lcm-ansible Updated

0.2.0-119-g8f05f58-1

lcm-agent

0.2.0-149-g412c5a05

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

7.4.4-20200615123606

Alertmanager

0.21.0

Elasticsearch

7.6.1

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20200609085335

Grafana Updated

7.1.5

Grafana Image Renderer New

2.0.0

IAM Proxy

6.0.1

Kibana

7.6.1

MCC Metric Collector Updated

0.1-20201005141816

Metricbeat

7.6.1

Netchecker

1.4.1

Patroni

12-1.6p3

Prometheus

2.19.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter Updated

0.5.1-20201002144823

Prometheus libvirt Exporter

0.1-20200610164751

Prometheus Memcached Exporter

0.5.0

Prometheus MySQL Exporter

0.11.0

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20200428121305

Prometheus Postgres Exporter Updated

0.8.0-20201006113956

Prometheus RabbitMQ Exporter

0.29.0

Prometheus Relay

0.3-20200519054052

Pushgateway

1.2.0

sf-notifier Updated

0.3-20200813125431

sf-reporter

0.1-20200219140217

telegraf-ds Updated

1.9.1-20200901112858

telegraf-s Updated

1.9.1-20200901112858

Telemeter

4.4.0-20200424

0

For the MKE release highlights and components versions, see MKE documentation: MKE release notes.

1

For the MCR release highlights, see MCR documentation: MCR release notes.

Artifacts

This section lists the components artifacts of the Cluster release 5.8.0.


Note

The components that are newly added, updated, deprecated, or removed as compared to the previous release version are marked with a corresponding superscript, for example, lcm-ansible Updated.

Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller Updated

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-18.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v14.2.9

ceph-controller Updated

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20200903151423

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v2.1.2

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v1.2.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v1.4.0

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v1.2.2

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v2.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.3.8


LCM artifacts

Artifact

Component

Path

Binaries

ansible-docker Updated

https://binary.mirantis.com/lcm/bin/ansible-docker/v0.3.5-141-g1007cc9/ansible-docker.tar.gz

lcm-ansible Updated

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.2.0-119-g8f05f58-1/lcm-ansible.tar.gz

lcm-agent Updated

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.2.0-169-g5668304d/lcm-agent

Helm charts

descheduler Updated

https://binary.mirantis.com/core/helm/descheduler-1.12.2.tgz

metallb Updated

https://binary.mirantis.com/core/helm/metallb-1.12.2.tgz

metrics-server Updated

https://binary.mirantis.com/core/helm/metrics-server-1.12.2.tgz

Docker images

descheduler

mirantis.azurecr.io/lcm/descheduler/v0.8.0

helm Updated

mirantis.azurecr.io/lcm/helm/tiller:v2.16.9-39

helm-controller Updated

mirantis.azurecr.io/lcm/lcm-controller:v0.2.0-169-g5668304d

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-12.tgz

elasticsearch

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-20.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-2.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd Updated

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-15.tgz

fluentd-elasticsearch

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-25.tgz

grafana Updated

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-64.tgz

iam-proxy

https://binary.mirantis.com/iam/helm/iam-proxy-0.2.2.tgz

kibana Updated

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-20.tgz

mcc-metric-collector

https://binary.mirantis.com/stacklight/helm/mcc-metric-collector-0.1.0-mcp-22.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-8.tgz

netchecker

https://binary.mirantis.com/core/helm/netchecker-1.4.1.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-17.tgz

prometheus Updated

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-80.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-3.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-3.tgz

prometheus-libvirt-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-libvirt-exporter-0.1.0-mcp-2.tgz

prometheus-memcached-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-memcached-exporter-0.1.0-mcp-1.tgz

prometheus-mysql-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-mysql-exporter-0.3.2-mcp-1.tgz

prometheus-nginx-exporter Updated

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-4.tgz

prometheus-rabbitmq-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-rabbitmq-exporter-0.4.1-mcp-1.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-5.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.1.0-mcp-6.tgz

stacklight Updated

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-312.tgz

telegraf-ds Updated

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-16.tgz

telegraf-s Updated

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-16.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-11.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-11.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:7.4.4-20200615123606

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.21.0

alpine-python3-requests

mirantis.azurecr.io/stacklight/alpine-python3-requests:latest-20200320

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.6.1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20200609085335

gce-proxy

mirantis.azurecr.io/stacklight/gce-proxy:1.11

grafana Updated

mirantis.azurecr.io/stacklight/grafana:7.1.5

grafana-image-renderer New

mirantis.azurecr.io/stacklight/grafana-image-renderer:2.0.0

kubectl New

mirantis.azurecr.io/stacklight/kubectl:1.15.3

k8s-netchecker-agent

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-agent:2019.1

k8s-netchecker-server

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-server:2019.1

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:0.1.178

kibana

mirantis.azurecr.io/stacklight/kibana:7.6.1

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

mcc-metric-collector Updated

mirantis.azurecr.io/stacklight/mcc-metric-collector:v0.1-20201005141816

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.6.1

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.19.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.5.1-20201002144823

prometheus-libvirt-exporter

mirantis.azurecr.io/stacklight/libvirt-exporter:v0.1-20200610164751

prometheus-memcached-exporter

mirantis.azurecr.io/stacklight/memcached-exporter:v0.5.0

prometheus-mysql-exporter

mirantis.azurecr.io/stacklight/mysqld-exporter:v0.11.0

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20200428121305

prometheus-postgres-exporter Updated

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.8.0-20201006113956

prometheus-rabbitmq-exporter

mirantis.azurecr.io/stacklight/rabbitmq-exporter:v0.29.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20200519054052

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier Updated

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20200813125431

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20200219140217

spilo

mirantis.azurecr.io/stacklight/spilo:12-1.6p3

telegraf Updated

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20200901112858

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

5.7.0

This section outlines release notes for the Cluster release 5.7.0 that is introduced in the Mirantis Container Cloud release 2.0.0 and supports Mirantis Kubernetes Engine 3.3.3, Mirantis Container Runtime 19.03.12, and Kubernetes 1.18.

For the list of known issues, refer to the Container Cloud release 2.0.0 Known issues.

Components versions

The following table lists the components versions of the Cluster release 5.7.0.

Components versions of the Cluster release 5.7.0

Component

Application/Service

Version

Cluster orchestration

Mirantis Kubernetes Engine

3.3.3 0

Container runtime

Mirantis Container Runtime

19.03.12 1

Distributed storage

Ceph

14.2.9 (Nautilus)

Rook

1.3.8

LCM

ansible-docker

0.3.5-136-g38653c7

descheduler

0.8.0

Helm

2.16.7-38

helm-controller

0.2.0-149-g412c5a05

lcm-ansible

0.2.0-110-g63cf88b

lcm-agent

0.2.0-149-g412c5a05

metallb-controller

0.9.3-1

metrics-server

0.3.6-1

StackLight

Alerta

7.4.4-20200615123606

Alertmanager

0.21.0

Elasticsearch

7.6.1

Elasticsearch Curator

5.7.6

Elasticsearch Exporter

1.0.2

Fluentd

1.10.2-20200609085335

Grafana

6.6.2

IAM Proxy

6.0.1

Kibana

7.6.1

MCC Metric Collector

0.1-20200806113043

Metricbeat

7.6.1

Netchecker

1.4.1

Patroni

12-1.6p3

Prometheus

2.19.2

Prometheus Blackbox Exporter

0.14.0

Prometheus ES Exporter

0.5.1-20200313132957

Prometheus libvirt Exporter

0.1-20200610164751

Prometheus Memcached Exporter

0.5.0

Prometheus MySQL Exporter

0.11.0

Prometheus Node Exporter

1.0.1

Prometheus NGINX Exporter

0.6.0

Prometheus Patroni Exporter

0.1-20200428121305

Prometheus Postgres Exporter

0.8.0-20200715102834

Prometheus RabbitMQ Exporter

0.29.0

Prometheus Relay

0.3-20200519054052

Pushgateway

1.2.0

sf-notifier

0.3-20200430122138

sf-reporter

0.1-20200219140217

telegraf-ds

1.9.1-20200806073506

telegraf-s

1.9.1-20200806073506

Telemeter

4.4.0-20200424

0

For the MKE release highlights and components versions, see MKE documentation: MKE release notes.

1
  • For the MCR release highlights, see MCR documentation: MCR release notes.

  • Due to development limitations, the MCR upgrade to version 19.03.14 on existing Container Cloud clusters is not supported.

Artifacts

This section lists the components artifacts of the Cluster release 5.7.0.


Ceph artifacts

Artifact

Component

Path

Helm chart

ceph-controller

https://binary.mirantis.com/ceph/helm/ceph-operator-1.0.0-mcp-16.tgz

Docker images

ceph

mirantis.azurecr.io/ceph/ceph:v14.2.9

ceph-controller

mirantis.azurecr.io/ceph/mcp/ceph-controller:v1.0.0-20200805103414

cephcsi

mirantis.azurecr.io/ceph/cephcsi/cephcsi:v2.1.2

csi-node-driver-registrar

mirantis.azurecr.io/ceph/k8scsi/csi-node-driver-registrar:v1.2.0

csi-provisioner

mirantis.azurecr.io/ceph/k8scsi/csi-provisioner:v1.4.0

csi-snapshotter

mirantis.azurecr.io/ceph/k8scsi/csi-snapshotter:v1.2.2

csi-attacher

mirantis.azurecr.io/ceph/k8scsi/csi-attacher:v2.1.0

rook

mirantis.azurecr.io/ceph/rook/ceph:v1.3.8


LCM artifacts

Artifact

Component

Path

Binaries

ansible-docker

https://binary.mirantis.com/lcm/bin/ansible-docker/v0.3.5-136-g38653c7/ansible-docker.tar.gz

lcm-ansible

https://binary.mirantis.com/lcm/bin/lcm-ansible/v0.2.0-110-g63cf88b/lcm-ansible.tar.gz

lcm-agent

https://binary.mirantis.com/lcm/bin/lcm-agent/v0.2.0-149-g412c5a05/lcm-agent

Helm charts

descheduler

https://binary.mirantis.com/core/helm/descheduler-1.10.12.tgz

metallb

https://binary.mirantis.com/core/helm/metallb-1.10.12.tgz

metrics-server

https://binary.mirantis.com/core/helm/metrics-server-1.10.12.tgz

Docker images

descheduler

mirantis.azurecr.io/lcm/descheduler/v0.8.0

helm

mirantis.azurecr.io/lcm/helm/tiller:v2.16.9-39

helm-controller

mirantis.azurecr.io/lcm/lcm-controller:v0.2.0-149-g412c5a05

metallb-controller

mirantis.azurecr.io/lcm/metallb/controller:v0.9.3-1

metallb-speaker

mirantis.azurecr.io/lcm/metallb/speaker:v0.9.3-1

metrics-server

mirantis.azurecr.io/lcm/metrics-server-amd64/v0.3.6-1


StackLight artifacts

Artifact

Component

Path

Helm charts

alerta

https://binary.mirantis.com/stacklight/helm/alerta-0.1.0-mcp-12.tgz

elasticsearch

https://binary.mirantis.com/stacklight/helm/elasticsearch-7.1.1-mcp-20.tgz

elasticsearch-curator

https://binary.mirantis.com/stacklight/helm/elasticsearch-curator-1.5.0-mcp-2.tgz

elasticsearch-exporter

https://binary.mirantis.com/stacklight/helm/elasticsearch-exporter-1.2.0-mcp-2.tgz

fluentd

https://binary.mirantis.com/stacklight/helm/fluentd-2.0.3-mcp-15.tgz

fluentd-elasticsearch

https://binary.mirantis.com/stacklight/helm/fluentd-elasticsearch-3.0.0-mcp-24.tgz

grafana

https://binary.mirantis.com/stacklight/helm/grafana-3.3.10-mcp-59.tgz

kibana

https://binary.mirantis.com/stacklight/helm/kibana-3.2.1-mcp-19.tgz

mcc-metric-collector

https://binary.mirantis.com/stacklight/helm/mcc-metric-collector-0.1.0-mcp-22.tgz

metricbeat

https://binary.mirantis.com/stacklight/helm/metricbeat-1.7.1-mcp-8.tgz

netchecker

https://binary.mirantis.com/core/helm/netchecker-1.4.1.tgz

patroni

https://binary.mirantis.com/stacklight/helm/patroni-0.15.1-mcp-17.tgz

prometheus

https://binary.mirantis.com/stacklight/helm/prometheus-8.11.4-mcp-73.tgz

prometheus-blackbox-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-blackbox-exporter-0.3.0-mcp-3.tgz

prometheus-es-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-es-exporter-1.0.0-mcp-3.tgz

prometheus-libvirt-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-libvirt-exporter-0.1.0-mcp-2.tgz

prometheus-memcached-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-memcached-exporter-0.1.0-mcp-1.tgz

prometheus-mysql-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-mysql-exporter-0.3.2-mcp-1.tgz

prometheus-nginx-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-nginx-exporter-0.1.0-mcp-2.tgz

prometheus-rabbitmq-exporter

https://binary.mirantis.com/stacklight/helm/prometheus-rabbitmq-exporter-0.4.1-mcp-1.tgz

sf-notifier

https://binary.mirantis.com/stacklight/helm/sf-notifier-0.1.0-mcp-5.tgz

sf-reporter

https://binary.mirantis.com/stacklight/helm/sf-reporter-0.1.0-mcp-6.tgz

stacklight

https://binary.mirantis.com/stacklight/helm/stacklight-0.1.2-mcp-285.tgz

telegraf-ds

https://binary.mirantis.com/stacklight/helm/telegraf-ds-1.1.5-mcp-14.tgz

telegraf-s

https://binary.mirantis.com/stacklight/helm/telegraf-s-1.1.5-mcp-14.tgz

telemeter-server

https://binary.mirantis.com/stacklight/helm/telemeter-server-0.1.0-mcp-11.tgz

telemeter-client

https://binary.mirantis.com/stacklight/helm/telemeter-client-0.1.0-mcp-11.tgz

Docker images

alerta

mirantis.azurecr.io/stacklight/alerta-web:7.4.4-20200615123606

alertmanager

mirantis.azurecr.io/stacklight/alertmanager:v0.21.0

alpine-python3-requests

mirantis.azurecr.io/stacklight/alpine-python3-requests:latest-20200320

busybox

mirantis.azurecr.io/stacklight/busybox:1.30

configmap-reload

mirantis.azurecr.io/stacklight/configmap-reload:v0.3.0

curl

mirantis.azurecr.io/stacklight/curl:7.69.0

curl-jq

mirantis.azurecr.io/stacklight/curl-jq:1.5-1

elasticsearch

mirantis.azurecr.io/stacklight/elasticsearch:7.6.1

elasticsearch-curator

mirantis.azurecr.io/stacklight/curator:5.7.6

elasticsearch-exporter

mirantis.azurecr.io/stacklight/elasticsearch_exporter:1.0.2

fluentd

mirantis.azurecr.io/stacklight/fluentd:1.10.2-20200609085335

gce-proxy

mirantis.azurecr.io/stacklight/gce-proxy:1.11

grafana

mirantis.azurecr.io/stacklight/grafana:6.6.2

k8s-netchecker-agent

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-agent:2019.1

k8s-netchecker-server

mirantis.azurecr.io/lcm/kubernetes/k8s-netchecker-server:2019.1

k8s-sidecar

mirantis.azurecr.io/stacklight/k8s-sidecar:0.1.178

kibana

mirantis.azurecr.io/stacklight/kibana:7.6.1

kube-state-metrics

mirantis.azurecr.io/stacklight/kube-state-metrics:v1.9.2

mcc-metric-collector

mirantis.azurecr.io/stacklight/mcc-metric-collector:v0.1-20200806113043

metricbeat

mirantis.azurecr.io/stacklight/metricbeat:7.6.1

node-exporter

mirantis.azurecr.io/stacklight/node-exporter:v1.0.1

origin-telemeter

mirantis.azurecr.io/stacklight/origin-telemeter:4.4.0-20200424

prometheus

mirantis.azurecr.io/stacklight/prometheus:v2.19.2

prometheus-blackbox-exporter

mirantis.azurecr.io/stacklight/blackbox-exporter:v0.14.0

prometheus-es-exporter

mirantis.azurecr.io/stacklight/prometheus-es-exporter:v0.5.1-20200313132957

prometheus-libvirt-exporter

mirantis.azurecr.io/stacklight/libvirt-exporter:v0.1-20200610164751

prometheus-memcached-exporter

mirantis.azurecr.io/stacklight/memcached-exporter:v0.5.0

prometheus-mysql-exporter

mirantis.azurecr.io/stacklight/mysqld-exporter:v0.11.0

prometheus-nginx-exporter

mirantis.azurecr.io/stacklight/nginx-prometheus-exporter:0.6.0

prometheus-patroni-exporter

mirantis.azurecr.io/stacklight/prometheus-patroni-exporter:v0.1-20200428121305

prometheus-postgres-exporter

mirantis.azurecr.io/stacklight/prometheus-postgres-exporter:v0.8.0-20200715102834

prometheus-rabbitmq-exporter

mirantis.azurecr.io/stacklight/rabbitmq-exporter:v0.29.0

prometheus-relay

mirantis.azurecr.io/stacklight/prometheus-relay:v0.3-20200519054052

pushgateway

mirantis.azurecr.io/stacklight/pushgateway:v1.2.0

sf-notifier

mirantis.azurecr.io/stacklight/sf-notifier:v0.3-20200430122138

sf-reporter

mirantis.azurecr.io/stacklight/sf-reporter:v0.1-20200219140217

spilo

mirantis.azurecr.io/stacklight/spilo:12-1.6p3

telegraf

mirantis.azurecr.io/stacklight/telegraf:v1.9.1-20200806073506

telemeter-token-auth

mirantis.azurecr.io/stacklight/telemeter-token-auth:v0.1-20200406175600

See also

Patch releases

Patch releases

Since Container Cloud 2.23.2, the release train comprises several patch releases that Mirantis delivers on top of a major release mainly to incorporate security updates as soon as they become available without waiting for the next major release. By significantly reducing the time to provide fixes for Common Vulnerabilities and Exposures (CVE), patch releases protect your clusters from cyber threats and potential data breaches.

Major and patch versions update path

The primary distinction between major and patch product versions is that major releases introduce new functionality, whereas patch releases predominantly offer minor product enhancements, mostly CVE resolutions for your clusters.

Depending on your deployment needs, you can either update only between major Cluster releases or apply patch updates between major releases. Choosing the latter option ensures that you receive security fixes as soon as they become available. However, be prepared to update your cluster frequently, approximately once every three weeks. Otherwise, you can update only between major Cluster releases as each subsequent major Cluster release includes patch Cluster release updates of the previous major Cluster release.

Content delivery in major and patch releases

As compared to a major Cluster release update, a patch release update does not involve any public API or LCM changes, major version bumps of MKE or other major components, or workload evacuation. A patch Cluster release update may only require a restart of containers running the Container Cloud controllers, MKE, Ceph, and StackLight services to update base images with related libraries and apply CVE fixes to images. The data plane is not affected.

The following table lists differences between content delivery in major releases as compared to patch releases:

Content delivery in major and patch releases

Content

Major release

Patch release

Major version upgrade of the major product components including but not limited to Ceph and StackLight 0

Patch version bumps of MKE and Kubernetes 1

Container runtime changes including Mirantis Container Runtime and containerd updates

Changes in public API

Changes in the Container Cloud lifecycle management

Host machine changes including host operating system updates and upgrades, kernel updates, and so on 2

CVE fixes for images

Fixes for known product issues

0

Some of StackLight sub-components may be updated for patch releases.

1

MKE patch version bumps are available since Container Cloud 2.24.3 (Cluster releases 15.0.2 and 14.0.2).

2

Kernel update in patch releases is available since Container Cloud 2.26.1 (Cluster releases 17.1.1 and 16.1.1).

Update paths for major vs patch releases

Management clusters obtain patch releases automatically the same way as major releases. Managed clusters use the same update delivery method as for the major Cluster release updates. New patch Cluster releases become available through the Container Cloud web UI after automatic upgrade of a management cluster to the latest patch Cluster release.

You may decide to use only major Cluster releases without updating to patch Cluster releases. In this case, you will perform updates from an N to N+1 major release.

Major Cluster releases include all patch updates of the previous major Cluster release. However, Mirantis recommends applying security fixes using patch releases as soon as they become available to avoid security threats and potentially achieve legal compliance.

If you delay the Container Cloud upgrade and schedule it at a later time as described in Schedule Mirantis Container Cloud updates, make sure to schedule a longer maintenance window as the upgrade queue can include several patch releases along with the major release upgrade.

For the update procedure, refer to Operations Guide: Update a patch Cluster release of a managed cluster.

Patch update schemes before and since 2.26.5

Starting from Container Cloud 2.26.5 (Cluster releases 16.1.5 and 17.1.5), Mirantis introduces a new update scheme for managed clusters that allows for update path flexibility.

Update schemes comparison

Since Container Cloud 2.26.5

Before Container Cloud 2.26.5

The user can update a managed cluster to any patch version in the series even if a newer patch version has been released already.

Note

In Container Cloud patch releases 2.27.1 and 2.27.2, only the 16.2.x patch Cluster releases will be delivered with an automatic update of management clusters and the possibility to update non-MOSK managed clusters.

In parallel, 2.27.1 and 2.27.2 will include new 16.1.x and 17.1.x patches for MOSK 24.1.x. And the first 17.2.x patch Cluster release for MOSK 24.2.x will be delivered in 2.27.3. For details, see MOSK documentation: Update path for 24.1 and 24.2 series.

The user cannot update a managed cluster to the intermediate patch version in the series if a newer patch version has been released. For example, when the patch Cluster release 17.0.4 becomes available, you can update from 17.0.1 to 17.0.4 at once, but not from 17.0.1 to 17.0.2.

The user can always update to the newer major version from the latest patch version of the previous series. Additionally, a major update is possible during the course of the patch series from the patch version released immediately before the target major version.

If the cluster starts receiving patch releases, the user must apply the latest patch version in the series to be able to update to the following major release. For example, to obtain the major Cluster release 17.1.0 while using the patch Cluster release 17.0.2, you must update your cluster to the latest patch Cluster release 17.0.4 first.

Latest supported patch releases

The following table lists the latest Container Cloud 2.29.x patch releases and their supported Cluster releases that are being delivered on top of the Container Cloud major release 2.29.0. Click the required patch release link to learn more about its deliverables.

Container Cloud 2.29.x and supported patch Cluster releases

Patch release

Container Cloud

2.29.2

2.29.1

2.29.0

Release history

Release date

Apr 22, 2025

Mar 26, 2025

Mar 11, 2025

Patch Cluster releases (managed)

17.3.x
MOSK 24.3.x
17.3.6 + 24.3.3
17.3.5 + 24.3.2
17.3.4 + 24.3.1

17.3.5 + 24.3.2
17.3.4 + 24.3.1


17.3.4 + 24.3.1

16.4.x

16.4.1

16.3.x

16.3.6
16.3.5
16.3.4

16.3.5
16.3.4


16.3.4

Legend

Symbol

Definition

Cluster release is not included in the Container Cloud release yet.

Cluster release is deprecated, and you must update it to the latest supported Cluster release. The deprecated Cluster release will become unsupported in one of the following Container Cloud releases. Greenfield deployments based on a deprecated Cluster release are not supported. Use the latest supported Cluster release instead.

Deprecation notes

This section provides deprecation notes only about the unsupported OpenStack cloud provider. The information about deprecated and removed functionality of the bare metal provider, Ceph, and StackLight was moved to MOSK documentation: Deprecation Notes.

Deprecated and removed features of the OpenStack cloud provider
Component

Deprecated in
Finally available in
Removed in
Comments

OpenStack-based clusters

2.28.4

2.28.5

2.29.0

Suspended support for OpenStack-based deployments in favor of the MOSK product. Simultaneously, ceased performing functional integration testing of the OpenStack provider and removed the possibility to update an OpenStack-based cluster to Container Cloud 2.29.0 (Cluster release 16.4.0).

Therefore, the final supported version for this cloud provider is Container Cloud 2.28.5 (Cluster release 16.3.5). If you still require the feature, contact Mirantis support for further information.

Reference Application for workload monitoring

2.28.0

2.28.2

2.28.3

Deprecated support for Reference Application on non-MOSK managed clusters. Due to this deprecation, if the RefAppDown alert is firing in the cluster, disable refapp.enabled to prevent unnecessary alerts.

Note

For the feature support on MOSK deployments, refer to MOSK documentation: Deploy your first cloud application using automation.

Regional clusters

2.25.0

2.25.0

2.26.0

Suspended support for regional clusters of the same or different cloud provider type on a single management cluster. Additionally, suspended support for several regions on a single management cluster. Simultaneously, ceased performing functional integration testing of the feature and removed the related code in Container Cloud 2.26.0. If you still require this feature, contact Mirantis support for further information.

Bootstrap v1

2.25.0

2.25.0

2.26.0

Deprecated the bootstrap procedure using Bootstrap v1 in favor of Bootstrap v2. For details, see Deploy Container Cloud using Bootstrap v2.

Attachment of MKE clusters

2.24.0

2.24.0

2.24.0

Suspended support for attachment of existing Mirantis Kubernetes Engine (MKE) clusters that were originally not deployed by Container Cloud. Also suspended support for all related features, such as sharing a Ceph cluster with an attached MKE cluster.