OpenStack known issues¶
This section lists known OpenStack issues, with workarounds, for the Mirantis OpenStack for Kubernetes release 21.2.
[13422] Redis pods remain in Pending state and cause update failure
[12511] Kubernetes workers remain in Prepare state
[13273] Octavia amphora may get stuck after cluster update
[6912] Octavia load balancers may not work properly with DVR
[13422] Redis pods remain in Pending state and cause update failure¶
During the MOS cluster update to Cluster release 6.14.0, some Redis pods may remain in the Pending state and cause the update to fail.
Workaround:

1. Scale the Redis deployment to 0 replicas:

   kubectl -n openstack-redis scale deployment rfs-openstack-redis --replicas=0

2. Wait for the pods to be removed.

3. Scale the Redis deployment back to 3 replicas:

   kubectl -n openstack-redis scale deployment rfs-openstack-redis --replicas=3

4. Obtain the list of ReplicaSets:

   kubectl -n openstack-redis get replicaset

   Example of system response:

   NAME                                         DESIRED   CURRENT   READY   AGE
   os-redis-operator-redisoperator-6bd8455f8c   1         1         1       26h
   rfs-openstack-redis-568b8f6fcb               0         0         0       26h
   rfs-openstack-redis-798655cf9b               3         3         3       24h

5. Remove the ReplicaSet that has 0 replicas:

   kubectl -n openstack-redis delete replicaset rfs-openstack-redis-568b8f6fcb
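If several old ReplicaSets are present, the stale one can be identified programmatically rather than by eye. The following is a minimal Python sketch (illustrative only, not part of the product) that parses the plain-table output of kubectl get replicaset shown above and returns the names of ReplicaSets scaled to 0 replicas:

```python
def stale_replicasets(kubectl_output: str) -> list[str]:
    """Return names of ReplicaSets whose DESIRED count is 0.

    Expects the plain-table output of `kubectl get replicaset`:
    a header line followed by one row per ReplicaSet.
    """
    names = []
    for line in kubectl_output.strip().splitlines()[1:]:  # skip header
        fields = line.split()
        if len(fields) >= 2 and fields[1] == "0":
            names.append(fields[0])
    return names


# The example system response from the step above.
example = """\
NAME                                         DESIRED   CURRENT   READY   AGE
os-redis-operator-redisoperator-6bd8455f8c   1         1         1       26h
rfs-openstack-redis-568b8f6fcb               0         0         0       26h
rfs-openstack-redis-798655cf9b               3         3         3       24h
"""

print(stale_replicasets(example))  # names of ReplicaSets to delete
```

Each returned name can then be passed to kubectl -n openstack-redis delete replicaset as in step 5.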
[12511] Kubernetes workers remain in Prepare state¶
During the MOS cluster update to Cluster release 6.14.0, Kubernetes nodes may get stuck in the Prepare state. At the same time, the LCM Controller logs may contain the following errors:
evicting pod "horizon-57f7ccff74-d469c"
error when evicting pod "horizon-57f7ccff74-d469c" (will retry after
5s): Cannot evict pod as it would violate the pod's disruption budget.
The workaround is to decrease the Pod Disruption Budget (PDB) limit for Horizon by executing the following command on the managed cluster:
kubectl -n openstack patch pdb horizon -p='{"spec": {"minAvailable": 1}}'
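For context, the check that produces the eviction error above can be approximated as follows: an eviction of one pod is allowed only if the remaining healthy pods still satisfy the budget's minAvailable. This Python sketch is an illustration of that rule, not the Kubernetes implementation:

```python
def eviction_allowed(healthy_pods: int, min_available: int) -> bool:
    """Approximate PodDisruptionBudget check: evicting one pod must
    leave at least `min_available` healthy pods."""
    return healthy_pods - 1 >= min_available


# If minAvailable equals the current number of healthy replicas,
# every eviction is denied and the node drain cannot proceed.
print(eviction_allowed(healthy_pods=2, min_available=2))  # False: eviction blocked
# Lowering minAvailable to 1, as in the patch above, unblocks the drain.
print(eviction_allowed(healthy_pods=2, min_available=1))  # True
```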
[13273] Octavia amphora may get stuck after cluster update¶
After the MOS cluster update, an Octavia amphora may get stuck, with the following error message present in the Octavia worker logs:

WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.

The workaround is to manually switch the Octavia amphora provider driver from V2 to V1.
Workaround:
In the OsDpl CR, specify the following configuration:
spec:
  services:
    load-balancer:
      octavia:
        values:
          conf:
            octavia:
              api_settings:
                default_provider_driver: amphora
Trigger the OpenStack deployment to restart Octavia:
kubectl apply -f openstackdeployment.yaml
To monitor the status:

kubectl -n openstack get pods
kubectl -n openstack describe osdpl osh-dev
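As an alternative to editing openstackdeployment.yaml by hand, the deeply nested setting above can be generated as a JSON merge-patch body. A short Python sketch (the patch-building approach and any kubectl patch invocation are suggestions, not the documented procedure):

```python
import json

# Build the nested OsDpl merge-patch body for the setting shown above.
keys = ["spec", "services", "load-balancer", "octavia", "values",
        "conf", "octavia", "api_settings"]
patch: dict = {"default_provider_driver": "amphora"}
for key in reversed(keys):
    patch = {key: patch}  # wrap in each parent key, innermost first

print(json.dumps(patch, indent=2))
```

The printed JSON could then be applied with, for example, kubectl -n openstack patch osdpl osh-dev --type=merge -p '<patch>' instead of reapplying the full manifest.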
[6912] Octavia load balancers may not work properly with DVR¶
Limitation
When Neutron is deployed in the DVR mode, Octavia load balancers may not work correctly. The symptoms include both failure to properly balance traffic and failure to perform an amphora failover. For details, see DVR incompatibility with ARP announcements and VRRP.