Tungsten Fabric known issues and limitations¶
This section lists the Tungsten Fabric known issues with workarounds for the Mirantis OpenStack for Kubernetes release 21.4.
[10096] tf-control does not refresh IP addresses of Cassandra pods
[13755] TF pods switch to CrashLoopBackOff after a simultaneous reboot
[16241] Failure to update instance port through Horizon
Limitations¶
Tungsten Fabric does not provide the following functionality:
Automatic generation of network port records in DNSaaS (Designate), since Neutron with Tungsten Fabric as a backend is not integrated with DNSaaS. As a workaround, you can use the Tungsten Fabric built-in DNS service that enables virtual machines to resolve each other's names.
Secret management (Barbican). You cannot use the certificates stored in Barbican to terminate HTTPS in a load balancer.
Role-Based Access Control (RBAC) for Neutron objects.
Modification of custom vRouter DaemonSets based on the SR-IOV definition in the OsDpl CR.
[10096] tf-control does not refresh IP addresses of Cassandra pods¶
The tf-control service resolves the DNS names of the Cassandra pods at startup and does not update them if the Cassandra pods obtain new IP addresses, for example, after a restart. As a workaround, to refresh the IP addresses of the Cassandra pods, restart the tf-control pods one by one:
Caution
Before restarting the tf-control pods:
Verify that the new pods are successfully spawned.
Verify that no vRouters are connected to only one tf-control pod that will be restarted.
kubectl -n tf delete pod tf-control-<hash>
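To restart the pods one by one in a single pass, you can loop over them and wait for each replacement to become Ready before deleting the next one. The following is a minimal sketch that assumes the tf-control pods carry the app=tf-control label; verify the actual label on your cluster first:

# Hypothetical label selector; verify it first with:
#   kubectl -n tf get pods --show-labels
for pod in $(kubectl -n tf get pods -l app=tf-control -o name); do
    kubectl -n tf delete "${pod}"
    # Wait until all tf-control pods, including the replacement,
    # report Ready before deleting the next one.
    kubectl -n tf wait --for=condition=Ready pod -l app=tf-control --timeout=300s
done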
[13755] TF pods switch to CrashLoopBackOff after a simultaneous reboot¶
Rebooting all Cassandra cluster nodes of TFConfig or TFAnalytics, maintenance, or other circumstances that cause the Cassandra pods to start simultaneously may break the Cassandra TFConfig or TFAnalytics cluster, or both. In this case, the Cassandra nodes do not join the ring and do not update the IP addresses of the neighbor nodes. As a result, the TF services cannot operate the Cassandra cluster(s).
To verify that a Cassandra cluster is affected:
Run the nodetool status command specifying the config or analytics cluster and the replica number:
kubectl -n tf exec -it tf-cassandra-<config/analytics>-dc1-rack1-<replica number> -c cassandra -- nodetool status
Example of system response with outdated IP addresses:
Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load      Tokens  Owns (effective)  Host ID                               Rack
DN  <outdated ip>  ?         256     64.9%             a58343d0-1e3f-4d54-bcdf-9b9b949ca873  r1
DN  <outdated ip>  ?         256     69.8%             67f1d07c-8b13-4482-a2f1-77fa34e90d48  r1

Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load      Tokens  Owns (effective)  Host ID                               Rack
UN  <actual ip>  3.84 GiB  256     65.2%             7324ebc4-577a-425f-b3de-96faac95a331  rack1
Workaround:
Manually delete a Cassandra pod from the failed config or analytics cluster to re-initiate the bootstrap process for one of the Cassandra nodes:
kubectl -n tf delete pod tf-cassandra-<config/analytics>-dc1-rack1-<replica number>
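After the deleted pod is re-created, you can confirm that the node has joined the ring by re-running nodetool status. The following is a minimal sketch that uses the config cluster and replica 0 as example values; substitute your own cluster and replica number:

# Example pod name; substitute the <config/analytics> cluster and replica number.
kubectl -n tf wait --for=condition=Ready pod tf-cassandra-config-dc1-rack1-0 --timeout=600s
# All nodes should now report UN (Up/Normal) with their current IP addresses.
kubectl -n tf exec -it tf-cassandra-config-dc1-rack1-0 -c cassandra -- nodetool status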
[16241] Failure to update instance port through Horizon¶
Fixed in MOS 21.5
Updating a port, or a security group assigned to a port, through the Horizon web UI fails with an error.
Workaround:
Update the port through the Tungsten Fabric web UI:
Log in to the Tungsten Fabric web UI.
Navigate to Configure > Networking > Ports.
Click the gear icon next to the required port and click Edit.
Change the parameters as required and click Save.
Update the port through the CLI:
Log in to the keystone-client pod.
Run the openstack port set command with the required parameters. For example:
openstack port set 48f7dfce-9111-4951-a2d3-95a63c94b64e --name port-name-changed
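To verify that the change was applied, you can display the updated attribute. The following sketch reuses the example port ID from above:

# Show the updated name of the example port.
openstack port show 48f7dfce-9111-4951-a2d3-95a63c94b64e -c name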