This section describes the specific actions you as a Cloud Operator need to
complete to accurately plan and successfully perform your
Mirantis OpenStack for Kubernetes (MOSK) cluster update to
version 23.1.
Consider this information as a supplement to the generic update procedure
published in Operations Guide: Update a MOSK cluster.
As part of the update to MOSK 23.1, Tungsten Fabric will
automatically get updated from version 2011 to version 21.4.
Note
For the compatibility matrix of the most recent MOSK
releases and their major components in conjunction with Container Cloud and
Cluster releases, refer to Release Compatibility Matrix.
The update to MOSK 23.1 does not include any
version-specific impact on the cluster. To start planning a maintenance window,
use the Operations Guide: Update a MOSK cluster standard procedure.
Before updating the cluster, be sure to review the potential issues that
may arise during the process and the recommended solutions to address
them, as outlined in Cluster update known issues.
If your Container Cloud management cluster has been updated to 2.24.1, replace
the baremetal-provider image tag 1.37.15 with 1.37.18 to avoid the issue
where the cluster update to MOSK 23.1 gets stuck waiting
for the lcm-agent to update the currentDistribution field:
Open the kaasrelease object for editing:
kubectl edit kaasrelease kaas-2-24-1
Replace the 1.37.15 tag with 1.37.18 for the baremetal-provider
image:
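In this object, the baremetal-provider image reference resembles the following
schematic fragment; the registry prefix and the exact location within the
kaasrelease spec may differ in your environment:
# Schematic fragment, illustrative path within the kaasrelease object
- name: baremetal-provider
  image: mirantis.azurecr.io/core/baremetal-provider:1.37.18  # replace the 1.37.15 tag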
Explicitly define the OIDCClaimDelimiter parameter
MOSK 23.1 introduces a new default value for the
OIDCClaimDelimiter parameter, which defines the delimiter to use when
setting multi-valued claims in the HTTP headers. See the MOSK 23.1 OpenStack
API Reference
for details.
Previously, the value of the OIDCClaimDelimiter parameter defaulted to
",". This value did not align with the behavior expected by Keystone.
As a result, when creating federation mappings for Keystone, the cloud operator
was forced to write more complex rules. Therefore, in MOSK
22.4, Mirantis announced the change of the default value for the
OIDCClaimDelimiter parameter.
If your deployment is affected and you have not yet explicitly defined the
OIDCClaimDelimiter parameter, as Mirantis advised after the update to
MOSK 22.4 or 22.5, do so now. Otherwise, you may encounter
unforeseen consequences after the update to MOSK 23.1.
Affected deployments
Proceed with the instruction below only if the following conditions are
true:
Keystone is set to use federation through the OpenID Connect protocol,
with Mirantis Container Cloud Keycloak in particular. The following
configuration is present in your OpenStackDeployment custom resource:
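For example, a minimal schematic fragment of such configuration, assuming
Keycloak is enabled as the identity provider (the exact set of options in your
deployment may differ):
spec:
  features:
    keystone:
      keycloak:
        enabled: true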
The new default value for the OIDCClaimDelimiter parameter
is ";". To find out whether your Keystone mappings will need
adjustment after changing the default value, set the parameter to
";" on your staging environment and verify the rules.
Verify the Ceph cluster configuration
Verify that the KaaSCephCluster custom resource does not contain the
following entries. If they exist, remove them (see the schematic fragment
after this list):
In the spec.cephClusterSpec section, the external section.
Caution
If the external section exists in the KaaSCephCluster
spec during the upgrade to MOSK 23.1, it causes a Ceph
outage that leads to corruption of the Cinder volumes file system and
requires extensive manual work to repair the affected Cinder volumes
one by one after the Ceph outage is resolved.
Therefore, make sure that the external section is removed from the
KaaSCephCluster spec right before starting the cluster upgrade.
In the spec.cephClusterSpec.rookConfig section, the ms_crc_data or
ms crc data configuration key. After you remove the key, wait for the
rook-ceph-mon pods to restart on the MOSK
cluster.
Caution
If the ms_crc_data key exists in the rookConfig section
of KaaSCephCluster during the upgrade to MOSK 23.1,
the connection between the Rook Operator and Ceph Monitors is lost during
the Ceph version upgrade. As a result, the upgrade gets stuck and requires
that you manually disable the ms_crc_data key for all Ceph Monitors.
Therefore, make sure that the ms_crc_data key is removed from the
KaaSCephCluster spec right before starting the cluster upgrade.
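For reference, open the object for editing and check for the following
entries; the project namespace, object name, and the key value below are
illustrative:
kubectl -n <managed-cluster-project> edit kaascephcluster <cluster-name>
spec:
  cephClusterSpec:
    external: {}           # remove the whole external section if present
    rookConfig:
      ms_crc_data: "true"  # remove this key if present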
Remove sensitive information from cluster configuration
The OpenStackDeploymentSecret custom resource has been deprecated in
MOSK 23.1. The fields that store confidential settings
in the OpenStackDeploymentSecret and OpenStackDeployment custom resources
must be migrated to Kubernetes secrets.
Note
For the functionality deprecation and deletion schedule, refer to
osdplsecret-deprecation-note.
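A schematic example of referencing a value stored in a Kubernetes secret from
the OpenStackDeployment custom resource instead of embedding it directly;
the field path, the secret name, and the key are assumptions shown for
illustration only:
spec:
  features:
    ssl:
      public_endpoints:
        api_cert:
          value_from:
            secret_key_ref:
              name: openstack-api-certs  # hypothetical secret name
              key: api_cert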
Plan for the new default value of RAM oversubscription
To ensure stability for production workloads, MOSK 23.1
changes the default value of RAM oversubscription on compute nodes to 1.0,
which means no oversubscription. In MOSK 22.5 and earlier,
the effective default value of the RAM allocation ratio is 1.1.
This change applies only to the compute nodes added to the cloud
after the update to MOSK 23.1. The effective RAM
oversubscription value for existing compute nodes does not change
automatically after the update.
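To check the effective RAM allocation ratio of an existing compute node, you
can query the placement service, for example with the osc-placement CLI
plugin; the provider UUID is a placeholder:
openstack resource provider list
openstack resource provider inventory list <compute-node-provider-uuid>
The allocation_ratio column of the MEMORY_MB resource class shows the
effective RAM oversubscription value.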
Use dynamic configuration for resource oversubscription
Since MOSK 23.1, the Compute service (OpenStack Nova)
enables you to control the resource oversubscription dynamically through
the placement API.
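For example, assuming the osc-placement CLI plugin is available, you can
adjust the RAM allocation ratio of a particular compute node through the
placement API; the provider UUID and the value are illustrative, and the
--amend option keeps the rest of the inventory unchanged:
openstack resource provider inventory set <compute-node-provider-uuid> \
    --amend --resource MEMORY_MB:allocation_ratio=1.0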
However, if your cloud already makes use of custom allocation ratios, the new
functionality will not become immediately available after update. Any compute
node configured with explicit values for the cpu_allocation_ratio,
disk_allocation_ratio, and ram_allocation_ratio configuration options
will continue to enforce those values in the placement service. Therefore, any
changes made through the placement API will be overridden by the values set in
those configuration options in the Compute service. To modify oversubscription
on such nodes, adjust the values of these configuration options in the
OpenStackDeployment custom resource. Perform this procedure with caution
because changing these values may restart the Compute services and potentially
disrupt instance builds.
To enable the use of the new functionality, Mirantis recommends removing
explicit values for the cpu_allocation_ratio, disk_allocation_ratio,
and ram_allocation_ratio options from the OpenStackDeployment custom
resource. Instead, use the new configuration options as described in
Configuring initial resource oversubscription. Also, keep in mind that
the changes will only impact newly added compute nodes and will not be applied
to the existing ones.
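For reference, explicit allocation ratio options that you may need to remove
typically reside in the Nova configuration overrides of the
OpenStackDeployment custom resource; the exact path below is illustrative and
depends on how your deployment defines these overrides:
spec:
  services:
    compute:
      nova:
        values:
          conf:
            nova:
              DEFAULT:
                cpu_allocation_ratio: 8.0   # remove to enable dynamic control
                ram_allocation_ratio: 1.5   # remove to enable dynamic control
                disk_allocation_ratio: 1.6  # remove to enable dynamic control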