OpenStack known issues
This section lists the OpenStack known issues with workarounds for the Mirantis OpenStack for Kubernetes release 22.3.
[26278] l3-agent gets stuck during Neutron restart
During an l3-agent restart, routers may not be initialized properly due to
erroneous logic in the Neutron code, causing the l3-agent to get stuck in the
Not ready state. The readiness probe reports that one of the routers is not
ready because its keepalived process has not started.
Example output of the kubectl -n openstack describe pod <neutron-l3 agent pod name> command:
Warning Unhealthy 109s (x476 over 120m) kubelet, ti-rs-nhmrmiuyqzxl-2-2obcnor6vt24-server-tmtr5ajqjflf \
Readiness probe failed: /tmp/health-probe.py:259: \
ERROR:/tmp/health-probe.py:The router: 66a885b7-0c7c-463a-a574-bdb19733baf3 is not initialized.
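To find which l3-agent pods and routers are affected, you can search the readiness probe messages. A minimal sketch, assuming the l3-agent pods carry the openstack-helm labels application=neutron and component=l3-agent (verify the label selector against your deployment):

# Show the readiness state of the neutron-l3-agent pods
kubectl -n openstack get pods -l application=neutron,component=l3-agent

# Extract the IDs of routers that the probe reports as not initialized
kubectl -n openstack describe pods -l application=neutron,component=l3-agent \
  | grep 'is not initialized'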
Workaround:

1. Remove the router from the l3-agent:

   neutron l3-agent-router-remove <l3-agent-name> <router-name>

2. Wait up to one minute.

3. Add the router back to the l3-agent:

   neutron l3-agent-router-add <l3-agent-name> <router-name>
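If several routers are affected, the three steps above can be scripted. A minimal sketch, where the router and agent IDs are placeholders to fill in, assuming the neutron CLI is available where the workaround is applied:

# Placeholders: substitute the affected router and its hosting l3-agent
ROUTER=<router-id>
AGENT=<l3-agent-id>

# Detach the stuck router from the agent
neutron l3-agent-router-remove "$AGENT" "$ROUTER"

# Give the agent up to one minute to tear the router down cleanly
sleep 60

# Re-attach the router so that the agent re-initializes it
neutron l3-agent-router-add "$AGENT" "$ROUTER"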
[22930] Octavia load balancers provisioning gets stuck
The provisioning_status of Octavia load balancers may get stuck in the
ERROR, PENDING_UPDATE, PENDING_CREATE, or PENDING_DELETE state.
Occasionally, the listeners or pools associated with these load balancers may
also get stuck in the same state.
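To identify the affected load balancers, filter them by provisioning status. A minimal sketch, assuming a python-octaviaclient version that supports the --provisioning-status filter (with older clients, list all load balancers and inspect the provisioning_status column instead):

# List the load balancers stuck in each problematic status
for status in ERROR PENDING_UPDATE PENDING_CREATE PENDING_DELETE; do
  openstack loadbalancer list --provisioning-status "$status" -f value -c id -c name
done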
Workaround:

For administrative users that have access to the keystone-client pod:

1. Log in to a keystone-client pod.

2. Delete the affected load balancer:

   openstack loadbalancer delete <load_balancer_id> --force
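If several load balancers are stuck, the deletion can be looped. A minimal sketch, run from the keystone-client pod, under the same client assumption as above:

# Force-delete every load balancer stuck in ERROR;
# repeat with the PENDING_* statuses as needed
for lb in $(openstack loadbalancer list --provisioning-status ERROR -f value -c id); do
  openstack loadbalancer delete "$lb" --force
done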
For non-administrative users, access the Octavia API directly and delete the affected load balancer using the "force": true argument in the delete request:

1. Access the Octavia API.

2. Obtain the token:

   TOKEN=$(openstack token issue -f value -c id)

3. Obtain the endpoint:

   ENDPOINT=$(openstack versions show --service load-balancer --interface public --status CURRENT -f value -c Endpoint)

4. Delete the affected load balancer:

   curl -H "X-Auth-Token: $TOKEN" -d '{"force": true}' -X DELETE $ENDPOINT/loadbalancers/<load_balancer_id>