Update notes

This section describes the specific actions that you, as a cloud operator, need to complete to accurately plan and successfully perform the update of your Mirantis OpenStack for Kubernetes (MOSK) cluster to version 22.5. Consider this information as a supplement to the generic update procedure published in Operations Guide: Update a MOSK cluster.

Additionally, read through the Cluster update known issues for the list of problems known to occur during the update, along with the recommended workarounds.

Features

The MOSK cluster obtains the newly implemented capabilities automatically, with no significant impact on the update procedure.

Update impact and maintenance windows planning

The update to MOSK 22.5 does not include any version-specific impact on the cluster. To start planning a maintenance window, use the Operations Guide: Update a MOSK cluster standard procedure.

Pre-update actions

Before you proceed with updating the cluster, make sure that you perform the following pre-update actions if applicable:

  • Due to the [29438] Cluster update gets stuck during the Tungsten Fabric operator update known issue, the MOSK cluster update from 22.4 to 22.5 can get stuck. Your cluster is affected if it was previously updated from MOSK 22.3 to 22.4, regardless of the SDN back end in use (Open vSwitch or Tungsten Fabric). Newly deployed MOSK 22.4 clusters are not affected.

    To avoid the issue, manually delete the tungstenfabric-operator-metrics service from the cluster before update:

    kubectl -n tf delete svc tungstenfabric-operator-metrics
    
  • Due to the known issue in the database auto-cleanup job for the Block Storage service (OpenStack Cinder), the state of volumes that are attached to instances gets reset every time the job runs. The workaround is to temporarily disable the job until the issue is fixed. For details, refer to [29501] Cinder periodic database cleanup resets the state of volumes.
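To make sure that the workaround for the Tungsten Fabric known issue has been applied, you can verify that the service no longer exists. The command below assumes the default tf namespace used in the workaround above:

    kubectl -n tf get svc tungstenfabric-operator-metrics

The command must fail with an Error from server (NotFound) message.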

Post-update actions

Explicitly define the OIDCClaimDelimiter parameter

The OIDCClaimDelimiter parameter defines the delimiter to use when setting multi-valued claims in the HTTP headers. See the MOSK 22.5 OpenStack API Reference for details.

The current default value of the OIDCClaimDelimiter parameter is ",". This value does not align with the behavior expected by Keystone. As a result, when creating federation mappings for Keystone, the cloud operator may be forced to write more complex rules. Therefore, in early 2023, Mirantis will change the default value of the OIDCClaimDelimiter parameter.
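A minimal, hypothetical Python sketch (not MOSK code) illustrates why the delimiter matters: a consumer that splits a multi-valued header on ";", the delimiter the new default will use, cannot separate values that were joined with ",":

```python
# Hypothetical illustration of multi-valued claim handling; not MOSK code.
groups = ["admins", "operators"]

# With the current default delimiter ",":
header = ",".join(groups)
print(header.split(";"))  # ['admins,operators'] - one unsplit value

# With the future default delimiter ";":
header = ";".join(groups)
print(header.split(";"))  # ['admins', 'operators'] - separate values
```

This mismatch is what can force a cloud operator to write more complex federation mapping rules; aligning the delimiter with what Keystone expects removes that need.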

Affected deployments

Proceed with the instructions below only if both of the following conditions are true:

  • Keystone is set to use federation through the OpenID Connect protocol, with Mirantis Container Cloud Keycloak in particular. The following configuration is present in your OpenStackDeployment custom resource:

    kind: OpenStackDeployment
    spec:
      features:
        keystone:
          keycloak:
            enabled: true
    
  • No value has been explicitly specified for the OIDCClaimDelimiter parameter in your OpenStackDeployment custom resource.
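To check whether a value is already specified, you can query your OpenStackDeployment custom resource. The command below is a sketch: it assumes the default openstack namespace and the osdpl short name of the resource. An empty output means that no value has been set:

kubectl -n openstack get osdpl -o jsonpath='{.items[*].spec.features.keystone.keycloak.oidc.OIDCClaimDelimiter}'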

To facilitate a smooth transition of the existing deployments to the new default value, explicitly define the OIDCClaimDelimiter parameter as follows:

kind: OpenStackDeployment
spec:
  features:
    keystone:
      keycloak:
        oidc:
          OIDCClaimDelimiter: ","

Note

The new default value for the OIDCClaimDelimiter parameter will be ";". To find out whether your Keystone mappings will need adjustment after changing the default value, set the parameter to ";" on your staging environment and verify the rules.

Optional. Set externalTrafficPolicy=Local for the OpenStack Ingress service

In MOSK 22.4 and older versions, the OpenStack Ingress service was not accessible through its load balancer (LB) IP address in environments where the external network was restricted to a few nodes in the MOSK cluster. For such use cases, Mirantis recommended setting the externalTrafficPolicy parameter to Cluster as a workaround.

The issue #24435 has been fixed in MOSK 22.5. Therefore, if you need to monitor the source IP addresses of the requests to OpenStack services, you can set the externalTrafficPolicy parameter back to Local.

Affected deployments

You are affected if your deployment configuration matches the following conditions:

  • The external network is restricted to a few nodes in the MOSK cluster. In this case, only a limited set of nodes have IPs in the external network where MetalLB announces LB IPs.

  • The workaround was applied by setting externalTrafficPolicy=Cluster for the Ingress service.

To set externalTrafficPolicy back from Cluster to Local:

  1. On the MOSK cluster, add a node selector to the L2Advertisement MetalLB object so that it matches the nodes in the MOSK cluster that have IPs in the external network, or a subset of those nodes.

    Example command to edit L2Advertisement:

    kubectl -n metallb-system edit l2advertisements
    

    Example of L2Advertisement.spec:

    spec:
      ipAddressPools:
      - services
      nodeSelectors:
      - matchLabels:
          openstack-control-plane: enabled
    

    The openstack-control-plane: enabled label selector selects the nodes in the MOSK cluster that have IPs in the external network.
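    Example command to verify which nodes the label selector matches:

    kubectl get nodes -l openstack-control-plane=enabled

    The output must contain only the nodes that have IPs in the external network.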

  2. In the MOSK Cluster object located on the management cluster, remove or edit node selectors and affinity for MetalLB speaker in the MetalLB chart values, if required.

    Example of the helmReleases section in Cluster.spec after editing the nodeSelector parameter:

    helmReleases:
      - name: metallb
        values:
          configInline:
            address-pools: []
          speaker:
            nodeSelector:
              kubernetes.io/os: linux
            resources:
              limits:
                cpu: 100m
                memory: 500Mi
    

    The MetalLB speaker DaemonSet must have the same node selector as the OpenStack Ingress service DaemonSet.
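    To compare the two node selectors, you can inspect both DaemonSet objects. The object names below are assumptions based on the default component names; adjust them to your deployment:

    kubectl -n metallb-system get ds metallb-speaker -o jsonpath='{.spec.template.spec.nodeSelector}'
    kubectl -n openstack get ds ingress -o jsonpath='{.spec.template.spec.nodeSelector}'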

    Note

    By default, the OpenStack Ingress service Pods run on all Linux cluster nodes.

  3. Change externalTrafficPolicy to Local for the OpenStack Ingress service.

    Example command to alter the Ingress object:

    kubectl -n openstack patch svc ingress -p '{"spec":{"externalTrafficPolicy":"Local"}}'
    
  4. Verify that OpenStack services are accessible through the load balancer IP of the OpenStack Ingress service.
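For example, you can query the Identity service through the cloud public endpoint. The hostname below is a placeholder; substitute the Keystone endpoint of your cloud:

curl -k https://keystone.<your-cloud-domain>/v3/

The Identity service responds with a JSON description of the API version.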

Remove Panko from the deployment

The OpenStack Panko service was removed from the product in MOSK 22.2, which was based on OpenStack Victoria, without any user involvement required. See Deprecation Notes: The OpenStack Panko service for details.

However, in MOSK 22.5, before upgrading to OpenStack Yoga, make sure that the Panko service is removed from the cloud by removing the event entry from the spec:features:services structure in the OpenStackDeployment resource as described in Operations Guide: Remove an OpenStack service.
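The removal boils down to deleting a single list entry from the OpenStackDeployment custom resource. The snippet below is an illustration only; your services list will contain other entries that must be kept:

kind: OpenStackDeployment
spec:
  features:
    services:
      - event # remove this entry to drop Panko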