OpenStack known issues

This section lists the OpenStack known issues with workarounds for the Mirantis OpenStack for Kubernetes release 21.4.


[6912] Octavia load balancers may not work properly with DVR

Limitation

When Neutron is deployed in the DVR mode, Octavia load balancers may not work correctly. The symptoms include both failure to properly balance traffic and failure to perform an amphora failover. For details, see DVR incompatibility with ARP announcements and VRRP.
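
To quickly check whether a router runs in the distributed (DVR) mode, you can, for example, inspect its distributed flag. The router name below is a placeholder:

openstack router show <router-name> -c distributed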


[14678] Instance inaccessible through floating IP upon floating IP quick reuse

Fixed in MOS 21.5

On deployments that use a small floating network, if a floating IP address previously allocated to one instance is re-associated with another instance within a short period of time, the new instance may be inaccessible through that floating IP. This is because the Address Resolution Protocol (ARP) cache timeout on the infrastructure layer is typically set to 5 minutes, so the reused floating IP may still resolve to a stale entry.

As a workaround, set a shorter ARP cache timeout on the infrastructure side.
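
The exact procedure depends on your network equipment. As an illustrative sketch only, on a Linux-based gateway the neighbour (ARP) cache timers can be lowered through sysctl; the values below are examples, not recommendations:

# Consider neighbour cache entries stale after ~30 seconds instead of minutes
sysctl -w net.ipv4.neigh.default.base_reachable_time_ms=30000
# Allow stale entries to be garbage-collected after 60 seconds
sysctl -w net.ipv4.neigh.default.gc_stale_time=60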


[16963] Ironic cannot provide nodes

Fixed in MOS 21.5

On deployments with OpenStack Victoria, Ironic may fail to provide nodes.

Workaround:

  1. In the OsDpl CR, set valid_interfaces to public,internal:

    spec:
      services:
        baremetal:
          ironic:
            values:
              conf:
                ironic:
                  service_catalog:
                    valid_interfaces: public,internal
    
  2. Trigger the OpenStack deployment to restart Ironic:

    kubectl apply -f openstackdeployment.yaml
    

    To monitor the status:

    kubectl -n openstack get pods
    kubectl -n openstack describe osdpl osh-dev
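
Additionally, to confirm that the Ironic pods have been restarted with the new configuration, you can list them by label. This is a sketch that assumes the application=<service> label scheme used for other services in this guide:

kubectl -n openstack get pods -l application=ironic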
    

[16495] Failure to reschedule OpenStack deployment pods after a node recovery

Kubernetes does not reschedule OpenStack deployment pods after a node recovery.

As a workaround, restart all OpenStack deployments in the openstack namespace:

for i in $(kubectl -n openstack get deployments | grep -v NAME | awk '{print $1}'); do
  kubectl -n openstack rollout restart deployment/$i
done

Once done, the pods will be recreated automatically.
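
To verify that all pods have been recreated and are healthy, you can, for example, filter out the pods that are not in the Running or Completed state:

kubectl -n openstack get pods | grep -v -e Running -e Completed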


[16452] Failure to update the Octavia policy after policies removal

Fixed in MOS 21.6

The Octavia policy is not updated after the removal of policies from the OsDpl CR. The issue affects OpenStack Victoria.

As a workaround, restart the Octavia API pods:

kubectl -n openstack delete pod -l application=octavia,component=api
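
To verify that the pods have been recreated, list them using the same label selector:

kubectl -n openstack get pods -l application=octavia,component=api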

[16103] Glance client returns HTTPInternalServerError error

Fixed in MOS 21.6

When Glance is configured with the Cinder back end (TechPreview), the Glance client may return the HTTPInternalServerError error while operating with volumes. As a workaround, repeat the failed operation until it succeeds.
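
If the operation is scripted, a simple retry loop can serve as a stopgap. The sketch below assumes an image download through the Glance client; the image ID and file name are placeholders:

# Retry the failing Glance operation until it succeeds
until openstack image save --file ./image.qcow2 <image-id>; do
    echo "Glance operation failed, retrying in 10 seconds..."
    sleep 10
done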