Cluster update known issues

This section lists the cluster update known issues with workarounds for the Mirantis OpenStack for Kubernetes (MOSK) release 22.2.


[22777] Admission Controller exception for deployments with Tungsten Fabric

Affects only MOSK 22.2

After updating the MOSK cluster, the Admission Controller prohibits the OsDpl update with the following error message:

TungstenFabric as network backend and setting of floating network
physnet name without network type and segmentation id are not compatible.

As a workaround, after the update, remove the orphaned physnet parameter from the OsDpl CR:

features:
  neutron:
    backend: tungstenfabric
    floating_network:
      enabled: true
      physnet: physnet1  # remove this orphaned parameter
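
For example, the parameter can be removed by editing the OpenStackDeployment (OsDpl) object in place. A minimal sketch, assuming the object lives in the openstack namespace and the osdpl short name is available; verify the actual object name first:

# List the OsDpl objects and open the affected one for editing, then
# delete the features:neutron:floating_network:physnet line from its spec
kubectl -n openstack get osdpl
kubectl -n openstack edit osdpl <osdpl-name>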

[21790] Ceph cluster fails to update due to ‘csi-rbdplugin’ not found

Fixed in MOSK 22.3

A Ceph cluster fails to update on a managed cluster with the following message:

Failed to configure Ceph cluster: ceph cluster verification is failed:
[Daemonset csi-rbdplugin is not found]

As a workaround, restart the rook-ceph-operator pod:

kubectl -n rook-ceph scale deploy rook-ceph-operator --replicas 0
kubectl -n rook-ceph scale deploy rook-ceph-operator --replicas 1
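
After the operator pod is running again, verify that the csi-rbdplugin DaemonSet from the error message has been recreated:

kubectl -n rook-ceph get daemonset csi-rbdplugin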

[23154] Ceph health is in ‘HEALTH_WARN’ state after managed cluster update

After updating the MOSK cluster, Ceph health is in the HEALTH_WARN state with the SLOW_OPS health message. The workaround is to restart the affected Ceph Monitors.
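
In a Rook-managed cluster, a Ceph Monitor can be restarted by restarting its deployment in the rook-ceph namespace. A minimal sketch; rook-ceph-mon-a below is an example name, identify the affected Monitors from the SLOW_OPS message first:

# List the Ceph Monitor deployments managed by Rook
kubectl -n rook-ceph get deploy -l app=rook-ceph-mon
# Restart the affected Monitor, for example rook-ceph-mon-a
kubectl -n rook-ceph rollout restart deploy rook-ceph-mon-a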


[23771] Connectivity loss due to wrong update order of Neutron services

Fixed in MOSK 22.3

After updating the cluster, a simultaneous unordered restart of the Neutron L2, L3, DHCP, and Metadata services leads to a state where ports on br-int are tagged with valid VLAN tags but with trunks: [4095].

Example of affected ports in Open vSwitch:

Port "tapdb11212e-15"
    tag: 1
    trunks: [4095]

Workaround:

  1. Identify the nodes with the affected OVS ports:

    for i in $(kubectl -n openstack get pods | grep openvswitch-vswitchd | awk '{print $1}'); do echo $i; kubectl -n openstack exec -it -c openvswitch-vswitchd $i -- ovs-vsctl show | grep trunks | head -1; done
    
  2. Exec into each openvswitch-vswitchd pod with affected ports identified in the previous step and run:

    for i in $(ovs-vsctl show | grep trunks -B 3 | grep Port | awk '{print $2}' | tr -d '"'); do ovs-vsctl set port $i tag=4095; done
    
  3. Restart the neutron-ovs agent on the affected nodes (see the example below).
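
The agent pods can be restarted, for example, by deleting them so that they are recreated. A minimal sketch, assuming openstack-helm-style labels on the Neutron OVS agent pods; verify the labels in your deployment first:

# Find the agent pod running on the affected node
kubectl -n openstack get pods -o wide -l application=neutron,component=neutron-ovs-agent
# Delete it so that it is recreated
kubectl -n openstack delete pod <neutron-ovs-agent-pod-on-the-affected-node>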

[24435] MetalLB speaker fails to announce the LB IP for the Ingress service

Fixed in MOSK 22.5

After updating the MOSK cluster, the MetalLB speaker may fail to announce the Load Balancer (LB) IP address for the OpenStack Ingress service. As a result, the OpenStack Ingress service is not accessible using its LB IP address.

The issue may occur if the nodeSelector of the MetalLB speaker does not select all the nodes selected by the nodeSelector of the OpenStack Ingress service.

The issue may appear and disappear when the MetalLB Controller selects a new MetalLB speaker to announce the LB IP address.

The issue occurs since MOSK 22.2, where externalTrafficPolicy was set to local for the OpenStack Ingress service.
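
To check whether the selectors diverge, compare the nodeSelector of the MetalLB speaker with the one of the OpenStack Ingress Pods. A minimal sketch; the metallb-system namespace and the speaker and ingress object names are assumptions, adjust them to your deployment:

kubectl -n metallb-system get daemonset speaker -o jsonpath='{.spec.template.spec.nodeSelector}'
kubectl -n openstack get deployment ingress -o jsonpath='{.spec.template.spec.nodeSelector}'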

Workaround:

Select from the following options:

  • Set externalTrafficPolicy to cluster for the OpenStack Ingress service (see the example after this list).

    This option is preferable in the following cases:

    • If not all cluster nodes have a connection to the external network

    • If the connection to the external network cannot be established

    • If network configuration changes are not desired

  • If network configuration changes are allowed and you require the externalTrafficPolicy: local option:

    1. Wire the external network to all cluster nodes where the OpenStack Ingress service Pods are running.

    2. Configure IP addresses in the external network on the nodes and change the default routes on the nodes.

    3. Change nodeSelector of MetalLB speaker to match nodeSelector of the OpenStack Ingress service.
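
For the first option, externalTrafficPolicy can be switched with a single patch. A minimal sketch, assuming the OpenStack Ingress service is named ingress and lives in the openstack namespace; verify the actual service name first:

kubectl -n openstack patch service ingress -p '{"spec": {"externalTrafficPolicy": "Cluster"}}'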