Update to a patch version¶
Patch releases aim to significantly shorten the delivery cycle of CVE fixes to your MOSK deployments, helping you protect your clouds against cyber threats and data breaches.
Your bare-metal management cluster obtains patch releases automatically, in the same way as major releases. A new MOSK patch release becomes available through the MOSK management console after the automatic upgrade of the management cluster.
It is not possible to update between patch releases that belong to different release series in one go. For example, you can update from MOSK 23.1.1 to 23.1.2, but you cannot update directly from MOSK 23.1.x to 23.2.x because you need to update to the major MOSK 23.2 release first.
Caution
If you delay the MOSK management upgrade and schedule it for a later time as described in Schedule MOSK management updates, make sure to plan a longer maintenance window because the upgrade queue can include several patch releases along with the major release upgrade.
Pre-update actions¶
Estimate the update impact¶
Read the Update notes section of the target MOSK release notes to understand the changes the release brings and the impact these changes will have on your cloud users and workloads.
Determine if cluster nodes need to be rebooted¶
Applying a patch release may not require a reboot of the cluster nodes. However, your cluster can still contain nodes that require a reboot after the last update to a major release, and this requirement persists through any of the following patch releases. Therefore, Mirantis strongly recommends that you determine whether your cluster contains such nodes before you update to the next patch release and reboot them, if any, as described in Step 4. Reboot the nodes with optional instance migration.
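As a quick way to identify such nodes, the sketch below lists the machines of your cluster together with their reboot flag. It is a minimal example that assumes the Machine custom resources expose the pending reboot indicator under status.providerStatus.reboot.required and that you run it against the management cluster, replacing the hypothetical <project-namespace> placeholder with the namespace of your cluster project:
# Assumption: the reboot indicator lives under status.providerStatus.reboot.required.
kubectl -n <project-namespace> get machines \
  -o custom-columns=NAME:.metadata.name,REBOOT_REQUIRED:.status.providerStatus.reboot.required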
Avoid network downtime for cloud workloads¶
For some MOSK versions, applying a patch release may require a restart of the containers that host the cloud data plane components. For Open vSwitch-based clusters, this may result in up to 5 minutes of workload network connectivity downtime per compute node.
The data plane restart only happens if the update contains new versions of the container images related to the data plane.
You can avoid the downtime for the cloud data plane by explicitly pinning the image versions of the following components:
Open vSwitch
Kubernetes entrypoint
However, pinning these images means that the cloud data plane will not receive any security or bug fixes during the update.
To pin the images:
Depending on the proxy configuration, the image base URL differs. To obtain the list of images currently used on the cluster, run:
kubectl -n openstack get ds openvswitch-openvswitch-vswitchd-default -o yaml | grep "image:" | sort -u
Example of system response:
image: mirantis.azurecr.io/general/openvswitch:2.13-focal-20230211095312
image: mirantis.azurecr.io/openstack/extra/kubernetes-entrypoint:v1.0.1-48d1e8a-20220919122849
Pin the openvswitch and kubernetes-entrypoint images used on your cluster. To do so, create a ConfigMap in the openstack namespace with the following content, replacing <OPENSTACKDEPLOYMENT-NAME> with the name of your OpenStackDeployment custom resource:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    openstack.lcm.mirantis.com/watch: "true"
  name: <OPENSTACKDEPLOYMENT-NAME>-artifacts
  namespace: openstack
data:
  caracal: |
    dep_check: <KUBERNETES-ENTRYPOINT-IMAGE-URL>
    openvswitch_db_server: <OPENVSWITCH-IMAGE-URL>
    openvswitch_vswitchd: <OPENVSWITCH-IMAGE-URL>
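A brief usage sketch for this step is shown below. The file name pinned-images-configmap.yaml is an arbitrary example: save the manifest above to that file, apply it, and re-run the image listing from the previous step after the update to confirm that the pinned versions are still in use.
# Apply the ConfigMap with the pinned images (example file name).
kubectl apply -f pinned-images-configmap.yaml

# Confirm that the ConfigMap exists in the openstack namespace.
kubectl -n openstack get configmap <OPENSTACKDEPLOYMENT-NAME>-artifacts

# After the update, verify that the DaemonSet still references the pinned images.
kubectl -n openstack get ds openvswitch-openvswitch-vswitchd-default -o yaml | grep "image:" | sort -u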
Update a patch Cluster release of a MOSK cluster¶
For the procedure, see Granularly update MOSK using the management console.
Note
In certain releases, the update may include Ubuntu package updates with a minor kernel version update. In this case, cordon-drain and reboot of machines are not applied automatically, and all machines display the Reboot is required notification after the cluster update. You can reboot the machines manually during a convenient maintenance window as described in Perform a graceful reboot of a cluster.
Note
In non-HA StackLight deployments, the KubePodsCrashLooping alert may fire temporarily for the Grafana ReplicaSet. This behavior is expected in non-HA StackLight setups. The Grafana pod resumes normal operation after PostgreSQL, which becomes temporarily unavailable during updates in the non-HA StackLight setup, is restored.
To prevent the issue, deploy StackLight in HA mode.
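To confirm that Grafana recovers on its own once PostgreSQL is back, you can check the pods in the StackLight namespace. The following is a minimal sketch that assumes StackLight runs in the stacklight namespace and that the pod names contain grafana and postgresql:
# List the Grafana and PostgreSQL pods and their current state.
kubectl -n stacklight get pods | grep -E 'grafana|postgresql'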
Note
If StackLight is enabled in HA mode, the OpenSearchClusterStatusCritical alert may trigger during a cluster update when the next OpenSearch node restarts before shards from the previous node finish reassignment. For details, see [48581] OpenSearchClusterStatusCritical is firing during cluster update.
You can safely ignore this alert; it disappears once the update succeeds.
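If you want to track shard recovery while the alert is firing, you can query the standard OpenSearch health endpoints from one of the OpenSearch pods. The snippet below is a sketch that assumes the stacklight namespace and a pod named opensearch-master-0; adjust both to match your deployment:
# Check the overall cluster health (status and number of unassigned shards).
kubectl -n stacklight exec opensearch-master-0 -- curl -s "http://localhost:9200/_cluster/health?pretty"

# List the shards that have not reached the STARTED state yet.
kubectl -n stacklight exec opensearch-master-0 -- curl -s "http://localhost:9200/_cat/shards?v" | grep -v STARTED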