This section describes the specific actions you as a cloud operator need to complete to accurately plan and successfully perform the update of your Mirantis OpenStack for Kubernetes (MOSK) cluster to version 22.3. Consider this information as a supplement to the generic update procedure published in Operations Guide: Update a MOSK cluster.
Additionally, read through the Cluster update known issues for the problems that are known to occur during update with recommended workarounds.
Migrating secrets from OpenStackDeployment to OpenStackDeploymentSecret CR¶
The OpenStackDeploymentSecret custom resource replaces the fields in the
OpenStackDeployment custom resource that used to keep the cloud's
confidential settings.
For the functionality deprecation and deletion schedule, refer to OpenStackDeployment CR fields containing cloud secret parameters.
After the update, migrate these fields from the OpenStackDeployment to the OpenStackDeploymentSecret custom resource as follows:

1. Create an OpenStackDeploymentSecret object with the same name as the OpenStackDeployment object.
2. Set the fields in the OpenStackDeploymentSecret custom resource as required. See OpenStackDeploymentSecret custom resource for details.
3. Remove the related fields from the OpenStackDeployment object.
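The resulting object might look like the sketch below. This is an illustrative example only: the apiVersion, the field names under spec:features, and the object name are assumptions, not the authoritative schema, so verify them against the OpenStackDeploymentSecret custom resource reference for your MOSK version.

```yaml
# Illustrative sketch only -- apiVersion and field names are assumptions.
apiVersion: lcm.mirantis.com/v1alpha1   # verify with: kubectl api-resources
kind: OpenStackDeploymentSecret
metadata:
  name: osh-dev        # must match the name of the OpenStackDeployment object
  namespace: openstack
spec:
  features:
    ssl:
      public_endpoints:
        api_cert: |-
          <CERTIFICATE>   # hypothetical placeholder
```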
Switching to built-in policies for OpenStack services¶
MOSK 22.3 switches all OpenStack components to built-in policies by default. If you have
any custom policies defined through the features:policies structure in the
OpenStackDeployment custom resource, some API calls may not work as
usual. Therefore, after completing the update, revalidate all the custom
access rules configured for your cloud.
Validation of custom OpenStack policies¶
Revalidate all the custom OpenStack access rules configured through the
features:policies structure in the OpenStackDeployment custom resource.
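For reference, custom policies are defined per OpenStack service under the features:policies structure. The following minimal fragment is a hypothetical illustration only; the service, policy target, and rule shown are examples, not recommendations:

```yaml
# Hypothetical example of the features:policies structure in the
# OpenStackDeployment object; the policy target and rule are illustrative.
spec:
  features:
    policies:
      nova:
        "os_compute_api:servers:create": "rule:admin_or_owner"
```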
Manual restart of TF vRouter agent Pods¶
To complete the update of a cluster with Tungsten Fabric as a back end for networking, manually restart Tungsten Fabric vRouter agent Pods on all compute nodes.
Restarting a vRouter agent on a compute node causes 30-60 seconds of networking downtime per instance hosted there. If such downtime is unacceptable for some workloads, migrate them before restarting the vRouter Pods.
Under certain rare circumstances, the reload of the vRouter kernel
module triggered by the restart of a vRouter agent can hang due to
the inability to complete the
drop_caches operation. Watch the status
and logs of the vRouter agent being restarted and trigger the reboot of
the node, if necessary.
To restart the vRouter Pods:
Starting from MOSK 22.4, the post-update restart of the TF vRouter Pods is performed automatically. Therefore, if the target version of your update is MOSK 22.4 or newer, skip this step.
Remove the vRouter Pods one by one manually.

Manual removal is required because the vRouter Pods use the OnDelete update strategy. A vRouter Pod restart causes networking downtime for the workloads on the affected node. If this is unacceptable for some workloads, migrate them before restarting the vRouter Pods.
kubectl -n tf delete pod <VROUTER-POD-NAME>
Verify that all tf-vrouter-* Pods have been updated:

kubectl -n tf get ds | grep tf-vrouter

The DESIRED and CURRENT fields must have the same values.
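The comparison of the DESIRED and CURRENT columns can also be scripted. The sketch below stubs the command output with sample text so it can run anywhere; on a live cluster, replace the stub with the actual kubectl call shown in the comment:

```shell
#!/bin/sh
# Sketch: verify that every tf-vrouter DaemonSet reports the same value
# in the DESIRED (column 2) and CURRENT (column 3) fields.
# On a live cluster, obtain the real output instead of the stub below:
#   out=$(kubectl -n tf get ds | grep tf-vrouter)
out='tf-vrouter-agent        3   3   3   3   3   <none>   10d
tf-vrouter-agent-dpdk   0   0   0   0   0   <none>   10d'

echo "$out" | awk '
  $2 != $3 { print $1 ": DESIRED=" $2 " CURRENT=" $3; bad = 1 }
  END      { exit bad }' \
  && echo "all tf-vrouter DaemonSets are updated"
```

The script exits non-zero and names the offending DaemonSet when any row has mismatched columns, which makes it easy to embed in a larger post-update check.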
Changing the format of Keystone domain_specific configuration¶
Switch to the new format of domain_specific_configuration in the
OpenStackDeployment object. For details, see Reference Architecture.
Cluster nodes reboot¶
Reboot the cluster nodes to complete the update as described in Update a MOSK cluster.