Track OpenSDN node maintenance progress
The Tungsten Fabric (TF) Operator monitors maintenance progress for both cluster-wide and node-level operations. The operator automatically calculates progress and displays it as a percentage based on the number of nodes that have successfully completed the maintenance lifecycle.
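For illustration, the reported percentage appears to follow from the ratio of completed nodes to total nodes. A minimal sketch in shell arithmetic, using the values from the example output below (1 of 5 nodes updated):

```shell
# Sketch of how the progress percentage appears to be derived:
# completed nodes * 100 / total nodes
completed=1
total=5
percentage=$(( completed * 100 / total ))
echo "${percentage}"   # prints 20
```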
Maintenance progress is reported through two custom resources: `ClusterWorkloadLock` and `NodeWorkloadLock`. To monitor maintenance progress:
1. Identify the `ClusterWorkloadLock` CR name for the TF Operator:

   Note: The TF Operator `ClusterWorkloadLock` follows the naming pattern `tf-<TFOPERATOR-CR-NAME>`.

   ```bash
   kubectl get tfoperators.tf.mirantis.com -n tf
   ```
2. Inspect the `ClusterWorkloadLock`:

   ```bash
   kubectl get clusterworkloadlock tf-<TFOPERATOR-CR-NAME> -o yaml
   ```

   Example progress output:

   ```yaml
   apiVersion: lcm.mirantis.com/v1alpha1
   kind: ClusterWorkloadLock
   metadata:
     creationTimestamp: "2026-01-16T15:25:37Z"
     generation: 1
     name: tf-openstack-tf
     resourceVersion: "4129207"
     uid: 586470b9-3271-498a-8d9b-f7ea391f81fe
   spec:
     controllerName: tungstenfabric
   status:
     inactivationProgress:
       description: 1 out of 5 nodes updated
       percentage: 20
     state: inactive
   ```
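To track just the percentage without reading the full YAML, you can extract the `status.inactivationProgress.percentage` field. Below is a minimal local sketch that parses a saved copy of the status section with `awk`; with cluster access, `kubectl get clusterworkloadlock tf-<TFOPERATOR-CR-NAME> -o jsonpath='{.status.inactivationProgress.percentage}'` retrieves the same field directly (the temporary file name here is illustrative):

```shell
# Save the status portion of the example ClusterWorkloadLock locally
# (illustrative file; on a live cluster, use kubectl with -o jsonpath)
cat > /tmp/cwl-status.yaml <<'EOF'
status:
  inactivationProgress:
    description: 1 out of 5 nodes updated
    percentage: 20
  state: inactive
EOF

# Pull out the percentage value
pct=$(awk '/percentage:/ {print $2}' /tmp/cwl-status.yaml)
echo "${pct}"   # prints 20
```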
3. Monitor the node-level status by viewing the progress of all nodes or filtering for TF-specific nodes:

   - List all `NodeWorkloadLock` objects:

     ```bash
     kubectl get nodeworkloadlocks -o wide
     ```

   - Filter for TF Operator `NodeWorkloadLock` objects specifically:

     ```bash
     kubectl get nodeworkloadlocks -o json | \
       jq '.items[] | select(.spec.controllerName == "tungstenfabric") |
         { name: .metadata.name, node: .spec.nodeName,
           state: .status.state, progress: .status.inactivationProgress }'
     ```
   Example progress output:

   ```json
   {
     "name": "tf-gz-ps-5s23xnyyb7lt-0-oh6icrboowqk-server-dr2oz6gdsrve",
     "node": "gz-ps-5s23xnyyb7lt-0-oh6icrboowqk-server-dr2oz6gdsrve",
     "state": "active",
     "progress": {
       "description": "Node update in progress"
     }
   }
   {
     "name": "tf-gz-ps-5s23xnyyb7lt-1-wfg3i35ter7j-server-ielrmii34cub",
     "node": "gz-ps-5s23xnyyb7lt-1-wfg3i35ter7j-server-ielrmii34cub",
     "state": "active",
     "progress": {
       "description": "Node update in progress"
     }
   }
   {
     "name": "tf-gz-ws-p5oig4blyobx-0-fx3z5ehhhl42-server-t6c6sqb2iqve",
     "node": "gz-ws-p5oig4blyobx-0-fx3z5ehhhl42-server-t6c6sqb2iqve",
     "state": "active",
     "progress": {
       "description": "Node update in progress"
     }
   }
   {
     "name": "tf-gz-ws-p5oig4blyobx-1-wc5uf3bntpco-server-swbomokc7bur",
     "node": "gz-ws-p5oig4blyobx-1-wc5uf3bntpco-server-swbomokc7bur",
     "state": "inactive",
     "progress": {
       "description": "Node update completed",
       "percentage": 100
     }
   }
   {
     "name": "tf-gz-ws-p5oig4blyobx-2-cbpddrvbfujl-server-wewvvquki3ls",
     "node": "gz-ws-p5oig4blyobx-2-cbpddrvbfujl-server-wewvvquki3ls",
     "state": "active",
     "progress": {
       "description": "Node update in progress"
     }
   }
   ```
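On a large cluster, it can help to summarize this output rather than read every entry. A minimal sketch that counts completed (`inactive`) versus in-progress (`active`) nodes from a saved copy of the `jq` output; the file name and the two embedded sample entries are hypothetical, included only to keep the sketch self-contained:

```shell
# Save a sample of the NodeWorkloadLock summary locally
# (hypothetical entries; on a live cluster, redirect the jq output here)
cat > /tmp/nwl.json <<'EOF'
{ "name": "tf-node-a", "node": "node-a", "state": "active",
  "progress": { "description": "Node update in progress" } }
{ "name": "tf-node-b", "node": "node-b", "state": "inactive",
  "progress": { "description": "Node update completed", "percentage": 100 } }
EOF

# inactive = maintenance completed on the node; active = still in progress
done_count=$(grep -c '"state": "inactive"' /tmp/nwl.json)
pending_count=$(grep -c '"state": "active"' /tmp/nwl.json)
echo "completed: ${done_count}, in progress: ${pending_count}"
```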
4. Troubleshoot the node status if necessary. If a node remains in the `Failed` status during maintenance, investigate the pods currently running on that specific node to identify workloads that have failed to terminate:

   ```bash
   kubectl get pods -n tf -o wide --field-selector spec.nodeName=<NODE-NAME>
   ```
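When the node runs many pods, narrowing the listing to pods stuck in `Terminating` makes the blocking workloads easier to spot. A minimal sketch filtering a saved pod listing with `awk`; the file name and sample pods are hypothetical, and with cluster access you could pipe the `kubectl get pods` command above directly into the same filter:

```shell
# Save a sample pod listing locally (hypothetical pod names;
# on a live cluster, redirect the kubectl get pods output here)
cat > /tmp/pods.txt <<'EOF'
NAME                    READY   STATUS        RESTARTS   AGE
tf-vrouter-agent-xyz    1/1     Running       0          4d
tf-control-abc          0/1     Terminating   0          4d
EOF

# Print only the pods whose STATUS column reads Terminating
stuck=$(awk '$3 == "Terminating" {print $1}' /tmp/pods.txt)
echo "${stuck}"   # prints tf-control-abc
```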