Patch releases significantly shorten the delivery cycle of CVE fixes to your
MOSK deployments, helping you protect your clouds against cyber
threats and data breaches.
Your bare-metal management cluster obtains patch releases automatically,
in the same way as major releases. A new MOSK patch release
becomes available through the MOSK management console after
the automatic upgrade of the management cluster.
You cannot update between patch releases that belong to different release
series in one go. For example, you can update from MOSK 23.1.1
to 23.1.2, but you cannot update directly from MOSK 23.1.x to
23.2.x: you must first update to the major MOSK 23.2 release.
Caution
If you delay the MOSK management upgrade and schedule it for
a later time as described in Schedule MOSK management updates, make sure to
schedule a longer maintenance window, as the upgrade queue can include
several patch releases along with the major release upgrade.
Pre-update actions
Estimate the update impact
Read the Update notes part of the target MOSK release notes
to understand the changes it brings and their impact on your cloud users
and workloads.
Determine if cluster nodes need to be rebooted
Applying a patch release may not require a reboot of the cluster nodes.
However, your cluster can contain nodes that require a reboot after the last
update to a major release, and this requirement persists through updates to
any of the subsequent patch releases. Therefore, Mirantis strongly recommends
that you determine whether your cluster contains such nodes before you update
to the next patch release and, if it does, reboot them as described in
Step 4. Reboot the nodes with optional instance migration.
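A quick CLI check for such nodes is sketched below. The field path
status.providerStatus.reboot.required is an assumption based on typical
Container Cloud Machine objects; verify it against your environment:

    # Sketch: list machines together with their reboot-required flag.
    # The field path and the project namespace placeholder are assumptions.
    kubectl -n <project-namespace> get machines \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.providerStatus.reboot.required}{"\n"}{end}'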
Avoid network downtime for cloud workloads
For some MOSK versions, applying a patch release may require
a restart of the containers that host the elements of the cloud data plane.
On Open vSwitch-based clusters, this may result in up to 5 minutes of downtime
of workload network connectivity for each compute node.
For MOSK releases prior to the 24.1 series, you can determine
whether applying a patch release requires a restart of the data plane by
consulting the Release artifacts part of the release notes of the current
and target MOSK releases.
The data plane restart will only happen if there are new versions of the
container images related to the data plane.
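For example, you can check which Open vSwitch image currently runs on the
cluster and compare it against the target release artifacts. This is a sketch
only; the label selector application=openvswitch follows common
OpenStack-Helm conventions and may differ on your cluster:

    # Sketch: show the Open vSwitch images currently in use
    kubectl -n openstack get pods -l application=openvswitch \
      -o jsonpath='{.items[*].spec.containers[*].image}' | tr ' ' '\n' | sort -u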
It is possible to avoid the downtime for the cloud data plane by
explicitly pinning the image versions of the following components:
Open vSwitch
Kubernetes entrypoint
However, pinning these images means that the cloud data plane will not receive
any security or bug fixes during the update.
To pin the images:
Depending on the proxy configuration, the image base URL differs.
To obtain the list of currently used images on the cluster, run:
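For example, the following generic listing works on any Kubernetes cluster;
it is a sketch rather than the exact command from the MOSK
documentation:

    # List all container images currently used by pods in the openstack namespace
    kubectl -n openstack get pods \
      -o jsonpath='{.items[*].spec.containers[*].image}' | tr ' ' '\n' | sort -u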
Add the openvswitch and kubernetes-entrypoint images used on your
cluster:
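To extract only the two images of interest from that listing, you can filter
it, for example:

    # Filter the image list down to the Open vSwitch and Kubernetes entrypoint images
    kubectl -n openstack get pods \
      -o jsonpath='{.items[*].spec.containers[*].image}' | tr ' ' '\n' | sort -u \
      | grep -E 'openvswitch|kubernetes-entrypoint'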
Since MOSK 25.1
Create a ConfigMap in the openstack namespace with the following
content, replacing <OPENSTACKDEPLOYMENT-NAME> with the name of your
OpenStackDeployment custom resource:
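The sketch below only illustrates the general shape of such a ConfigMap; the
ConfigMap name suffix, the data keys, and the image references are
assumptions, so take the exact schema from the official MOSK
25.1 documentation:

    # Hypothetical sketch: the ConfigMap name and data keys are assumptions
    kubectl -n openstack apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: <OPENSTACKDEPLOYMENT-NAME>-pinned-images
      namespace: openstack
    data:
      openvswitch: <registry>/general/openvswitch:<pinned-tag>
      kubernetes-entrypoint: <registry>/openstack/extra/kubernetes-entrypoint:<pinned-tag>
    EOF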
Available since MOSK 24.3.5. In the [maintenance]
section of the rockoon-config ConfigMap, disable
automated_openvswitch_restart by setting it to false:
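A minimal sketch of the change, assuming the rockoon-config ConfigMap resides
in the osh-system namespace (the namespace is an assumption; verify it on
your cluster):

    # Edit the Rockoon configuration; the osh-system namespace is an assumption
    kubectl -n osh-system edit configmap rockoon-config
    # In the editor, set the option under the [maintenance] section:
    #   [maintenance]
    #   automated_openvswitch_restart = false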
While automated_openvswitch_restart is disabled, Open vSwitch configuration
changes do not apply. Therefore, re-enable this option after the update is
complete.
Caution
You must unpin images and re-enable
automated_openvswitch_restart before updating to a major release.
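A minimal sketch of the rollback, under the same assumptions as above:

    # Remove the pinning ConfigMap created earlier; the name is an assumption
    kubectl -n openstack delete configmap <OPENSTACKDEPLOYMENT-NAME>-pinned-images
    # Re-enable the automated restart in the [maintenance] section:
    #   automated_openvswitch_restart = true
    kubectl -n osh-system edit configmap rockoon-config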
Update a patch Cluster release of a MOSK cluster
Select from the following options:
Recommended since MOSK 24.2 and available as the only
supported option since the management cluster update to Container Cloud
2.30.0 (Cluster release 20.0.0):
In the MOSK management console, open the Clusters
page and click Upgrade next to the required cluster. If
Upgrade does not display, your cluster is up-to-date.
In the Release Update window, select the required patch
Cluster release to update your cluster to.
The Description section contains the list of component
versions to be installed with the new Cluster release.
Click Update.
To view the update status, verify the cluster status on the
Clusters page. Once the orange blinking dot near the
cluster name disappears, the update is complete.
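You can also verify the applied Cluster release from the CLI on the
management cluster. The field path below is an assumption based on typical
Container Cloud Cluster objects:

    # Sketch: print the Cluster release currently set for the cluster
    kubectl -n <project-namespace> get cluster <cluster-name> \
      -o jsonpath='{.spec.providerSpec.value.release}{"\n"}'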
Since the procedure above modifies the cluster configuration, create a fresh
backup so that the cluster can be restored if further reconfigurations fail.
Important
Because the MKE restoration process is complicated, we strongly
recommend contacting Mirantis support for assistance.
If you still decide to restore MKE from a backup on your own, you must
scale down helm-controller on the cluster being restored (see the sketch
after the list below) if the MKE version of the affected cluster after the
restore will differ from the MKE version in the ClusterRelease object that
is set in the MOSK Cluster objects in the management cluster:
If you are restoring MKE on a management cluster: before starting the
restore, scale down helm-controller on each affected MOSK cluster.
This prevents unintended Ceph and OpenStack downgrades on MOSK clusters
after the management cluster is restored.
If you are restoring MKE on a MOSK cluster: immediately after the restore
completes, scale down helm-controller. Because the restore rolls the
cluster back to an older release, this prevents it from triggering a
premature upgrade of Helm releases.
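A minimal sketch of the scale-down, assuming helm-controller runs as a
Deployment in the kube-system namespace (verify the namespace and the
original replica count on your cluster):

    # Scale helm-controller down to zero replicas; the namespace is an assumption
    kubectl -n kube-system scale deployment helm-controller --replicas=0
    # Scale it back to its original replica count after the restore completes
    # and the MKE versions are aligned, for example:
    kubectl -n kube-system scale deployment helm-controller --replicas=1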
Note
Since Container Cloud 2.26.1 (patch Cluster releases 17.1.1 and
16.1.1), updates of Ubuntu packages that include a kernel minor version
update may apply in certain releases.
In this case, cordon-drain and reboot of machines do not apply
automatically, and all machines display the Reboot is required
notification after the cluster update. You can handle the reboot
of machines manually during a convenient maintenance window as described in
Perform a graceful reboot of a cluster.
Note
In non-HA StackLight deployments, the KubePodsCrashLooping alert
may temporarily fire for the Grafana ReplicaSet. Such behavior is
expected in non-HA StackLight setups. For details, see
known issue 42463.
To prevent the issue, deploy StackLight in HA mode.
Note
If StackLight is enabled in HA mode, the
OpenSearchClusterStatusCritical alert may trigger during the cluster update
when the next OpenSearch node restarts before the shards from the previous
node finish relocating. For details, see [48581] OpenSearchClusterStatusCritical is firing during cluster update.
You can ignore this alert; it will disappear once the update succeeds.