Tungsten Fabric known issues and limitations¶
This section lists the Tungsten Fabric known issues with workarounds for the Mirantis OpenStack for Kubernetes release 21.3.
[10096] tf-control does not refresh IP addresses of Cassandra pods
[13755] TF pods switch to CrashLoopBackOff after a simultaneous reboot
Limitations¶
Tungsten Fabric does not provide the following functionality:
Automatic generation of network port records in DNSaaS (Designate), since Neutron with Tungsten Fabric as a backend is not integrated with DNSaaS. As a workaround, you can use the Tungsten Fabric built-in DNS service that enables virtual machines to resolve each other's names.
Secret management (Barbican). You cannot use the certificates stored in Barbican to terminate HTTPS on a load balancer.
Role-Based Access Control (RBAC) for Neutron objects.
Modification of custom vRouter DaemonSets based on the SR-IOV definition in the OsDpl CR.
[10096] tf-control does not refresh IP addresses of Cassandra pods¶
The tf-control service resolves the DNS names of Cassandra pods at startup and does not update them if Cassandra pods obtain new IP addresses, for example, after a restart. As a workaround, to refresh the IP addresses of Cassandra pods, restart the tf-control pods one by one:
Caution

Before restarting the tf-control pods:

Verify that the new pods are successfully spawned.

Verify that no vRouters are connected to only one tf-control pod that will be restarted.
kubectl -n tf delete pod tf-control-<hash>
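If you prefer to script the restart, the following minimal sketch deletes the tf-control pods one at a time and waits for the replacements to become Ready. It is a sketch only: the pod naming pattern and the app=tf-control label selector are assumptions, and the vRouter connectivity check from the caution above still has to be performed manually.

# Minimal sketch: restart tf-control pods one by one.
# Assumptions: pods are named tf-control-<hash> and carry the label
# app=tf-control; verify both in your environment before running.
for pod in $(kubectl -n tf get pods -o name | grep tf-control); do
    kubectl -n tf delete "${pod}"
    # Give the controller time to spawn the replacement pod
    sleep 10
    # Wait until all tf-control pods report Ready before deleting the next one
    kubectl -n tf wait --for=condition=Ready pod -l app=tf-control --timeout=300s
done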
[13755] TF pods switch to CrashLoopBackOff after a simultaneous reboot¶
Rebooting all nodes of a Cassandra TFConfig or TFAnalytics cluster, maintenance, or other circumstances that cause the Cassandra pods to start simultaneously may break the Cassandra TFConfig and/or TFAnalytics cluster. In this case, the Cassandra nodes do not join the ring and do not update the IP addresses of the neighbor nodes. As a result, the TF services cannot operate the Cassandra cluster(s).
To verify that a Cassandra cluster is affected:
Run the nodetool status command specifying the config or analytics cluster and the replica number:
kubectl -n tf exec -it tf-cassandra-<config/analytics>-dc1-rack1-<replica number> -c cassandra -- nodetool status
Example of system response with outdated IP addresses:
Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load      Tokens  Owns (effective)  Host ID                               Rack
DN  <outdated ip>  ?         256     64.9%             a58343d0-1e3f-4d54-bcdf-9b9b949ca873  r1
DN  <outdated ip>  ?         256     69.8%             67f1d07c-8b13-4482-a2f1-77fa34e90d48  r1
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load      Tokens  Owns (effective)  Host ID                               Rack
UN  <actual ip>    3.84 GiB  256     65.2%             7324ebc4-577a-425f-b3de-96faac95a331  rack1
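To check all replicas of both clusters in one pass, you can loop over them as in the sketch below. The replica indices 0-2 and the pod naming pattern follow the command above but are assumptions about your deployment; adjust them as needed.

# Sketch: print any nodes reported as Down (DN) for every Cassandra replica
# of the config and analytics clusters. Replica count and pod names are
# assumptions; align them with your deployment.
for cluster in config analytics; do
    for replica in 0 1 2; do
        echo "=== tf-cassandra-${cluster} replica ${replica} ==="
        kubectl -n tf exec tf-cassandra-${cluster}-dc1-rack1-${replica} -c cassandra -- \
            nodetool status | grep '^DN' || echo "no DN nodes reported"
    done
done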
Workaround:
Manually delete a Cassandra pod from the failed config or analytics cluster to re-initiate the bootstrap process for one of the Cassandra nodes:
kubectl -n tf delete pod tf-cassandra-<config/analytics>-dc1-rack1-<replica number>
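After the deleted pod is recreated, verify that the node has joined the ring again. A minimal sketch, assuming the same pod naming as in the previous commands:

# Wait for the recreated Cassandra pod to become Ready, then confirm that the
# node reports UN (Up/Normal) in the ring
kubectl -n tf wait --for=condition=Ready pod tf-cassandra-<config/analytics>-dc1-rack1-<replica number> --timeout=600s
kubectl -n tf exec -it tf-cassandra-<config/analytics>-dc1-rack1-<replica number> -c cassandra -- nodetool status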