Patch releases aim to significantly shorten the delivery cycle of CVE fixes
to your MOSK deployments, helping you avoid cyber threats and data breaches.
Your bare-metal management cluster obtains patch releases automatically
in the same way as major releases. A new MOSK patch release
version becomes available through the Container Cloud web UI after the
automatic upgrade of the management cluster.
It is not possible to update between patch releases that belong to
different release series in one go. For example, you can update from
MOSK 23.1.1 to 23.1.2, but you cannot update from
MOSK 23.1.x directly to 23.2.x because you need to update to
the major MOSK 23.2 release first.
Caution
If you delay the Container Cloud upgrade and schedule it at a
later time as described in Schedule Mirantis Container Cloud updates, make sure to
schedule a longer maintenance window as the upgrade queue can include
several patch releases along with the major release upgrade.
Read the Update notes part of the target MOSK
release notes to understand the changes the release brings and the impact
they will have on your cloud users and workloads.
Applying a patch release does not necessarily require a reboot of cluster
nodes. However, your cluster may contain nodes that still require a reboot
after the last update to a major release, and this requirement persists
through updates to any of the following patch releases. Therefore, Mirantis
strongly recommends that you identify such nodes in your cluster before you
update to the next patch release and reboot them if any, as described in
Step 4. Reboot the nodes with optional instance migration.
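If you prefer the CLI, the following is a minimal sketch of how such nodes can be identified, assuming the Machine objects expose a reboot flag in their provider status; the exact field path is an assumption and may differ between Container Cloud releases, so verify it against the Machine status schema of your version.

```bash
# Hedged example: list machines in your project together with the assumed
# reboot flag. The status field path below is an assumption; verify it for
# your release before relying on the output.
kubectl -n <project-namespace> get machines \
  -o custom-columns='NAME:.metadata.name,REBOOT_REQUIRED:.status.providerStatus.reboot.required'
```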
For some MOSK versions, applying a patch release may require a
restart of the containers that host the cloud data plane components. In
Open vSwitch-based clusters, this may result in up to 5 minutes of workload
network connectivity downtime per compute node.
For MOSK releases prior to the 24.1 series, you can determine whether
applying a patch release requires a restart of the data plane by
consulting the Release artifacts part of the release notes of the current
and target MOSK releases.
The data plane restart only happens if there are new versions of the
container images related to the data plane.
You can avoid the downtime of the cloud data plane by
explicitly pinning the image versions of the following components:
Open vSwitch
Kubernetes entrypoint
However, pinning these images means that the cloud data plane will not receive
any security or bug fixes during the update.
To pin the images:
Depending on the proxy configuration, the image base URL differs.
To obtain the list of currently used images on the cluster, run:
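For example, the following is a minimal sketch that lists the images currently running in the openstack namespace, assuming kubectl access to the managed cluster; the exact command in the official procedure may differ.

```bash
# List the unique container images currently used by pods (including init
# containers) in the openstack namespace and filter for the data plane
# components mentioned above.
kubectl -n openstack get pods \
  -o jsonpath='{.items[*].spec.initContainers[*].image} {.items[*].spec.containers[*].image}' \
  | tr ' ' '\n' | sort -u | grep -E 'openvswitch|kubernetes-entrypoint'
```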
Add the openvswitch and kubernetes-entrypoint images used on your
cluster:
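The snippet below is a hedged sketch of what such pinning might look like for releases where the images are set directly in the OpenStackDeployment custom resource. It assumes the OpenStack-Helm style image overrides; the override path and the tag names openvswitch_db_server, openvswitch_vswitchd, and dep_check are assumptions, so verify them against the schema of your MOSK release before applying.

```yaml
# Hedged sketch: pin the data plane images in the OpenStackDeployment resource.
# The override path and tag names are assumptions; verify them for your release.
spec:
  services:
    networking:
      openvswitch:
        values:
          images:
            tags:
              openvswitch_db_server: <currently used Open vSwitch image URL>
              openvswitch_vswitchd: <currently used Open vSwitch image URL>
              dep_check: <currently used kubernetes-entrypoint image URL>
```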
Since MOSK 25.1
Create a ConfigMap in the openstack namespace with the following
content, replacing <OPENSTACKDEPLOYMENT-NAME> with the name of your
OpenStackDeployment custom resource:
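The following is a hypothetical sketch of such a ConfigMap; the actual name and data keys are defined by the MOSK 25.1 documentation for your release, so treat everything below except the namespace as a placeholder to verify.

```yaml
# Hypothetical sketch only: the ConfigMap name and data keys below are assumed,
# not the documented API; verify them in the official procedure before applying.
apiVersion: v1
kind: ConfigMap
metadata:
  name: <OPENSTACKDEPLOYMENT-NAME>-pinned-images   # assumed naming convention
  namespace: openstack
data:
  openvswitch: <currently used Open vSwitch image URL>
  kubernetes-entrypoint: <currently used kubernetes-entrypoint image URL>
```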
Since the procedure above modifies the cluster configuration, create a fresh
backup so that you can restore the cluster if further reconfiguration fails.
Note
Since Container Cloud 2.26.1 (patch Cluster releases 17.1.1 and
16.1.1), certain releases may include an update of Ubuntu packages with a
minor kernel version update.
In this case, the cordon-drain and reboot of machines is not applied
automatically, and all machines display the Reboot is required
notification after the cluster update. You can handle the reboot
of machines manually during a convenient maintenance window as described in
Perform a graceful reboot of a cluster.
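As a hedged illustration, the graceful reboot referenced above is typically requested through a GracefulRebootRequest object created in the cluster namespace; the API version and schema below are assumptions to verify against the documentation for your release.

```yaml
# Hedged sketch of a graceful reboot request; verify the API version and schema
# in the documentation for your release before applying.
apiVersion: kaas.mirantis.com/v1alpha1   # assumed API group/version
kind: GracefulRebootRequest
metadata:
  name: <cluster-name>                   # must match the name of the cluster
  namespace: <cluster-project-namespace>
spec:
  machines: []                           # an empty list requests a reboot of all machines
```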
Note
In non-HA StackLight deployments, the KubePodsCrashLooping alert
may temporarily fire for the Grafana ReplicaSet. This behavior is
expected in non-HA StackLight setups. For details, see
known issue 42463.
To prevent the issue, deploy StackLight in HA mode.
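As a hedged sketch, StackLight HA is usually enabled through the StackLight settings in the Cluster object; the exact location of the flag shown below is an assumption, so verify it against the StackLight configuration reference for your release.

```yaml
# Hedged sketch: enable StackLight HA in the Cluster object Helm release values.
# The exact path of this setting is an assumption; verify it for your release.
spec:
  providerSpec:
    value:
      helmReleases:
        - name: stacklight
          values:
            highAvailabilityEnabled: true
```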
Note
If StackLight is enabled in HA mode, the
OpenSearchClusterStatusCritical alert may trigger during cluster update
when the next OpenSearch node restarts before shard assignment from the
previous node completes. For details, see [48581] OpenSearchClusterStatusCritical is firing during cluster update.
You can safely ignore this alert; it disappears once the update succeeds.