Tungsten Fabric known issues and limitations


Limitations

  • Tungsten Fabric is not monitored by StackLight

  • Tungsten Fabric does not provide the following functionality:

    • Automatic generation of network port records in DNSaaS (Designate), since Neutron with Tungsten Fabric as a back end is not integrated with DNSaaS. As a workaround, you can use the Tungsten Fabric built-in DNS service that enables virtual machines to resolve each other's names.

    • Secret management (Barbican). You cannot use certificates stored in Barbican to terminate HTTPS in a load balancer.

    • Role-Based Access Control (RBAC) for Neutron objects.


[8469] Load balancer port always has default security group

Fixed in MOS Ussuri Update

Octavia always enables a default security group for a newly created load balancer, which may make the load balancer inaccessible. To work around the issue, select one of the following options:

  • Add the required rules to the default security group.

  • Delete the security group through the Tungsten Fabric web UI:

    1. Navigate to Configure > Networking > Ports.

    2. Remove the security group from the non-VIP ports. The VIP port has neutron:LOADBALANCER in the Device column.
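For the first option, the rules can be added through the OpenStack CLI. A minimal sketch follows; the protocol, port, and CIDR values are illustrative assumptions and must be adjusted to match the listeners of your load balancer:

```shell
# Allow inbound traffic to the load balancer listener port (443 here is an
# example; use the port your listener is configured with) from any source.
openstack security group rule create \
  --protocol tcp \
  --dst-port 443 \
  --remote-ip 0.0.0.0/0 \
  default

# Verify the rule was added to the default security group.
openstack security group rule list default
```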


[8293] Error messages on attempts to use loggers

Fixed in MOS Ussuri Update

The HAProxy service, which is used as a back end for load balancers in Tungsten Fabric, references nonexistent socket files of the log collection service. This misconfiguration causes error messages to be logged to contrail-lbaas-haproxy-stdout.log on attempts to use loggers. The issue does not affect service operability.


[10096] tf-control service does not refresh IP addresses of Cassandra pods

The tf-control service resolves the DNS names of Cassandra pods at startup and does not update them if the Cassandra pods obtain new IP addresses, for example, after a restart. As a workaround, to refresh the IP addresses of the Cassandra pods, restart the tf-control pods one by one:

Caution

Before restarting the tf-control pods:

  • Verify that the new pods are successfully spawned.

  • Verify that no vRouter is connected only to the tf-control pod that you are going to restart.

kubectl -n tf delete pod tf-control-<hash>
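The one-by-one restart can be sketched as follows; the app=tf-control label selector is an assumption and may differ in your deployment, so verify the actual labels of the tf-control pods first:

```shell
# List the tf-control pods and their labels to confirm the selector
# (app=tf-control below is an assumed label).
kubectl -n tf get pods --show-labels | grep tf-control

# Delete one tf-control pod; the controller spawns a replacement.
kubectl -n tf delete pod tf-control-<hash>

# Wait until the replacement pod is Ready before restarting the next one.
kubectl -n tf wait --for=condition=Ready pod \
  -l app=tf-control --timeout=300s
```

Repeat the delete and wait steps for each remaining tf-control pod, confirming after each restart that no vRouter loses its last control-node connection.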