Update notes

This section describes the specific actions you as a cloud operator need to complete to accurately plan and successfully perform the update of your Mirantis OpenStack for Kubernetes (MOSK) cluster to version 22.3. Consider this information as a supplement to the generic update procedure published in Operations Guide: Update a MOSK cluster.

Additionally, read through the Cluster update known issues for the problems that are known to occur during the update and the recommended workarounds.

Features

Migrating secrets from OpenStackDeployment to OpenStackDeploymentSecret CR

The OpenStackDeploymentSecret custom resource replaces the fields of the OpenStackDeployment custom resource that used to hold the cloud’s confidential settings. These include:

  • features:ssl

  • features:barbican:backends:vault:approle_role_id

  • features:barbican:backends:vault:approle_secret_id

After the update, migrate the fields listed above from the OpenStackDeployment to the OpenStackDeploymentSecret custom resource as follows (an example manifest is provided after the steps):

  1. Create an OpenStackDeploymentSecret object with the same name as the OpenStackDeployment object.

  2. Set the fields in the OpenStackDeploymentSecret custom resource as required.

  3. Remove the related fields from the OpenStackDeployment custom resource.
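
For illustration, a minimal OpenStackDeploymentSecret manifest could look like the sketch below. The apiVersion, the namespace, and the nesting under ssl (public_endpoints with api_cert, api_key, and ca_cert) are assumptions based on the typical OpenStackDeployment layout; verify them against the documentation for your release and replace the placeholder values with your actual certificates and Vault credentials.

    apiVersion: lcm.mirantis.com/v1alpha1      # assumption, verify the API group/version for your release
    kind: OpenStackDeploymentSecret
    metadata:
      name: <OSDPL-NAME>                       # must match the name of the OpenStackDeployment object
      namespace: openstack                     # assumption, use the namespace of your OpenStackDeployment
    spec:
      features:
        ssl:
          public_endpoints:                    # assumed layout, mirrors the former features:ssl section
            api_cert: |-
              <PEM-ENCODED-CERTIFICATE>
            api_key: |-
              <PEM-ENCODED-PRIVATE-KEY>
            ca_cert: |-
              <PEM-ENCODED-CA-CERTIFICATE>
        barbican:
          backends:
            vault:
              approle_role_id: <VAULT-APPROLE-ROLE-ID>
              approle_secret_id: <VAULT-APPROLE-SECRET-ID>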

Switching to built-in policies for OpenStack services

Switched all OpenStack components to built-in policies by default. If you have any custom policies defined through the features:policies structure in the OpenStackDeployment custom resource, some API calls may not work as usual. Therefore, after completing the update, revalidate all the custom access rules configured for your cloud.
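
For reference, such custom rules are typically defined per service under the features:policies structure, as in the hypothetical snippet below; the service sections and rule strings are examples only and may differ from what is configured in your cloud.

    spec:
      features:
        policies:
          nova:                                              # example service section
            "os_compute_api:os-hypervisors": "rule:admin_api"
          cinder:
            "volume:extend": "rule:admin_or_owner"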

Post-update actions

Validation of custom OpenStack policies

Revalidate all the custom OpenStack access rules configured through the features:policies structure in the OpenStackDeployment custom resource.
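
To locate the rules that need revalidation, you can dump the policies section of the OpenStackDeployment object and then exercise the corresponding API calls as the affected users. The namespace, object name, and cloud name below are placeholders.

    kubectl -n openstack get openstackdeployment <OSDPL-NAME> -o yaml | grep -A 20 'policies:'

    # Example check: confirm that a rule restricted to administrators
    # still denies a regular user
    openstack --os-cloud <NON-ADMIN-CLOUD> hypervisor list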

Manual restart of TF vRouter agent Pods

To complete the update of a cluster with Tungsten Fabric as a backend for networking, manually restart Tungsten Fabric vRouter agent Pods on all compute nodes.

Restarting a vRouter agent on a compute node causes up to 30-60 seconds of networking downtime per instance hosted there. If such downtime is unacceptable for some workloads, we recommend that you migrate them off the node beforehand, for example as shown below.
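
To move instances off a compute node before the restart, you can identify and live-migrate them as in the sketch below. The commands require administrative credentials, and the exact live-migration flags vary between python-openstackclient versions, so verify them with openstack server migrate --help first.

    # List the instances hosted on the compute node
    openstack server list --all-projects --host <COMPUTE-NODE-HOSTNAME>

    # Live-migrate an instance away from the node
    openstack server migrate --live-migration <SERVER-ID>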

Warning

Under certain rare circumstances, the reload of the vRouter kernel module triggered by the restart of a vRouter agent can hang due to the inability to complete the drop_caches operation. Watch the status and logs of the vRouter agent being restarted and trigger the reboot of the node, if necessary.

To restart the vRouter Pods:

  1. Remove the vRouter pods one by one manually.

    Note

    Manual removal is required because the vRouter Pods use the OnDelete update strategy. A vRouter Pod restart causes networking downtime for the workloads on the affected node; if such downtime is not acceptable for some workloads, migrate them before restarting the vRouter Pods. A scripted variant of this procedure is sketched after the verification step below.

    kubectl -n tf delete pod <VROUTER-POD-NAME>
    
  2. Verify that all tf-vrouter-* pods have been updated:

    kubectl -n tf get ds | grep tf-vrouter
    

    The UP-TO-DATE and CURRENT fields must have the same values.
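
If you prefer to script the restart, the following sketch deletes the vRouter agent Pods one at a time and waits for all agent Pods to report Ready before proceeding to the next one. The label selector is an assumption; check the labels actually used in your cluster with kubectl -n tf get pods --show-labels first.

    # The app=tf-vrouter-agent selector is an assumption, adjust it to your cluster
    for pod in $(kubectl -n tf get pods -l app=tf-vrouter-agent -o name); do
      kubectl -n tf delete "${pod}"
      # Give the DaemonSet controller time to recreate the Pod on the node
      sleep 10
      # Wait until all agent Pods, including the replacement, are Ready
      kubectl -n tf wait --for=condition=Ready pods -l app=tf-vrouter-agent --timeout=600s
    done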

Changing the format of Keystone domain_specific configuration

Switch to the new format of domain_specific_configuration in the OpenStackDeployment object. For details, see Reference Architecture: Standard configuration.
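
A typical way to apply the change is to edit the object in place and rework the section to the new format; the namespace and the field path below are based on the usual layout and may differ in your release.

    kubectl -n openstack edit openstackdeployment <OSDPL-NAME>
    # Update spec:features:keystone:domain_specific_configuration to the new
    # format described in Reference Architecture: Standard configuration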

Cluster nodes reboot

Reboot the cluster nodes to complete the update as described in Cluster update.