Replace a failed TF controller node

If one of the Tungsten Fabric (TF) controller nodes has failed, follow this procedure to replace it with a new node.

To replace a TF controller node:

Note

Pods that belong to the failed node can stay in the Terminating state.

  1. If the failed node has the tfconfigdb=enabled or tfanalyticsdb=enabled label, or both, assigned to it, obtain and note down the IP addresses of the Cassandra pods that run on the node to be replaced:

    kubectl -n tf get pods -owide | grep 'tf-cassandra.*<FAILED-NODE-NAME>'
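
    For example, a minimal sketch that keeps only the pod IP addresses in a local file for later use in step 6; the node name kaas-node-ctl-2 and the file name failed-cassandra-ips.txt are hypothetical:

    # Hypothetical name of the failed node
    NODE=kaas-node-ctl-2
    # Print the name, IP, and node of each Cassandra pod and keep only the IP
    # addresses of the pods that run on the failed node in a local file
    kubectl -n tf get pods -o custom-columns=NAME:.metadata.name,IP:.status.podIP,NODE:.spec.nodeName \
        | grep "tf-cassandra.*${NODE}" | awk '{print $2}' | tee failed-cassandra-ips.txt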
    
  2. Delete the failed TF controller node from the Kubernetes cluster:

    kubectl delete node <FAILED-TF-CONTROLLER-NODE-NAME>
    

    Note

    Once the failed node has been removed from the cluster, all pods that were stuck in the Terminating state should be removed.
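
    For example, assuming the same hypothetical node name as in the sketch above, you can confirm that the node is gone and that no pods remain stuck:

    # The node should no longer be listed (kubectl reports NotFound)
    kubectl get node kaas-node-ctl-2
    # No pods should remain in the Terminating state
    kubectl -n tf get pods | grep Terminating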

  3. Assign the TF labels to the new control plane node as per the table below using the following command (see also the labeling example after the notes below):

    kubectl label node <NODE-NAME> <LABEL-KEY=LABEL-VALUE> ...
    
    Tungsten Fabric (TF) node roles

    TF control plane
      Description: Hosts the TF control plane services such as database, messaging, api, svc, config.
      Kubernetes labels: tfconfig=enabled, tfcontrol=enabled, tfwebui=enabled, tfconfigdb=enabled
      Minimal count: 3

    TF analytics
      Description: Hosts the TF analytics services.
      Kubernetes labels: tfanalytics=enabled, tfanalyticsdb=enabled
      Minimal count: 3

    TF vRouter
      Description: Hosts the TF vRouter module and vRouter Agent.
      Kubernetes labels: tfvrouter=enabled
      Minimal count: Varies

    TF vRouter DPDK (Technical Preview)
      Description: Hosts the TF vRouter Agent in DPDK mode.
      Kubernetes labels: tfvrouter-dpdk=enabled
      Minimal count: Varies

    Note

    TF supports only Kubernetes OpenStack workloads. Therefore, you should label OpenStack compute nodes with the tfvrouter=enabled label.

    Note

    Do not specify the openstack-gateway=enabled and openvswitch=enabled labels for the OpenStack deployments with TF as a networking back end.
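
    For example, a sketch assuming the replacement node is named kaas-node-ctl-new (hypothetical) and takes over both the TF control plane and TF analytics roles of the failed node:

    kubectl label node kaas-node-ctl-new \
        tfconfig=enabled tfcontrol=enabled tfwebui=enabled tfconfigdb=enabled \
        tfanalytics=enabled tfanalyticsdb=enabled
    # Verify that the labels are assigned
    kubectl get node kaas-node-ctl-new --show-labels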

  4. Once you label the new Kubernetes node, new pods start scheduling on it. However, pods that use Persistent Volume Claims are stuck in the Pending state because their volume claims remain bound to the local volumes from the deleted node. To resolve the issue:

    1. Delete the PersistentVolumeClaim (PVC) bound to the local volume from the failed node (see the example after the note below):

      kubectl -n tf delete pvc <PVC-BOUNDED-TO-NON-EXISTING-VOLUME>
      

      Note

      Clustered services that use PVCs, such as Cassandra, Kafka, and ZooKeeper, start the replication process once the new pods move to the Ready state.
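
      For example, assuming the stuck claim belongs to a Cassandra analytics replica; the PVC name below is hypothetical:

      # List the Cassandra PVCs and find the one bound to the deleted node's local volume
      kubectl -n tf get pvc | grep cassandra
      kubectl -n tf delete pvc data-tf-cassandra-analytics-dc1-rack1-2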

    2. Check the PersistentVolumes (PVs) claimed by the deleted PVCs. If a PV is stuck in the Released state, delete it manually:

      kubectl -n tf delete pv <PV>
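
      For example, with a hypothetical PV name:

      # List PVs that are stuck in the Released state
      kubectl get pv | grep Released
      kubectl -n tf delete pv local-pv-cassandra-analytics-2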
      
    3. Delete the pod that is using the removed PVC:

      kubectl -n tf delete pod <POD-NAME>
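
      For example, assuming the rescheduled Cassandra analytics pod from the previous substeps is stuck in the Pending state (hypothetical pod name):

      kubectl -n tf get pods | grep Pending
      kubectl -n tf delete pod tf-cassandra-analytics-dc1-rack1-2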
      
  5. Verify that the pods have successfully started on the replaced controller node and remain in the Ready state.
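
    For example, assuming the hypothetical replacement node name used above:

    # All pods scheduled on the new node should report the Running status and full READY counts
    kubectl -n tf get pods -o wide --field-selector spec.nodeName=kaas-node-ctl-new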

  6. If the failed controller node had the tfconfigdb=enabled or tfanalyticsdb=enabled label, or both, assigned to it, remove the old Cassandra hosts from the config and analytics cluster configuration:

    1. Get the host ID of the removed Cassandra host using the pod IP addresses saved during Step 1:

      kubectl -n tf exec tf-cassandra-<config/analytics>-dc1-rack1-1 -c cassandra -- nodetool status
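
      For example, assuming the pod IP addresses were saved to failed-cassandra-ips.txt in step 1 (hypothetical file name), filter the nodetool output for them; the matching lines show the status and the host ID of the removed host:

      # Run against the config cluster here; repeat for the analytics cluster as needed
      kubectl -n tf exec tf-cassandra-config-dc1-rack1-1 -c cassandra -- nodetool status \
          | grep -f failed-cassandra-ips.txt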
      
    2. Verify that the removed Cassandra node has the DN status, which indicates that the node is currently offline.

    3. Remove the failed Cassandra host:

      kubectl -n tf exec tf-cassandra-<config/analytics>-dc1-rack1-1 -c cassandra -- nodetool removenode <HOST-ID>
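
      For example, with a hypothetical host ID taken from the nodetool status output:

      kubectl -n tf exec tf-cassandra-config-dc1-rack1-1 -c cassandra -- nodetool removenode 2d1f3a88-5b6c-4e7d-9f10-112233445566
      # The removed host should no longer be listed
      kubectl -n tf exec tf-cassandra-config-dc1-rack1-1 -c cassandra -- nodetool status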
      
  7. Delete terminated nodes from the TF configuration through the TF web UI:

    1. Log in to the TF web UI.

    2. Navigate to Configure > BGP Routers.

    3. Delete all terminated control nodes.

      Note

      You can manage nodes of other types from Configure > Nodes.