Delete a cluster machine using web UI

This section describes how to scale down an existing management, regional, or managed cluster by deleting one of its machines through the Mirantis Container Cloud web UI.

To delete a machine from a cluster using web UI:

  1. Carefully read the machine deletion precautions.

  2. Log in to the Container Cloud web UI with the m:kaas:namespace@operator or m:kaas:namespace@writer permissions.

  3. Log in to the host where your management cluster kubeconfig is located and where kubectl is installed.
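
    For example, assuming that the management cluster kubeconfig is saved as kubeconfig in the current directory (the file path is an assumption, adjust it to your environment), export it and verify the access:

      export KUBECONFIG=$(pwd)/kubeconfig
      kubectl get nodes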

  4. For the Equinix Metal and bare metal providers, ensure that the machine being deleted is not a Ceph Monitor. If it is, migrate the Ceph Monitor to keep an odd number of Ceph Monitors in the quorum after the machine deletion. For details, see Migrate a Ceph Monitor before machine replacement.
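
    To identify the nodes that currently host Ceph Monitors, you can inspect the Ceph Monitor pods. The following sketch assumes a typical Rook-based Ceph deployment in the rook-ceph namespace with the app=rook-ceph-mon pod label; verify both for your release:

      kubectl -n rook-ceph get pods -l app=rook-ceph-mon -o wide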

  5. If the machine is assigned to a machine pool, decrease the replicas count of the pool as described in Change replicas count of a machine pool.
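
    A hypothetical CLI alternative, assuming that your release exposes the MachinePool custom resource with a spec.replicas field (the resource kind and field name are assumptions; the linked procedure is authoritative):

      kubectl edit machinepool <POOL_NAME> -n <PROJECT_NAMESPACE>
      # Decrease spec.replicas by one, then save and exit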

  6. Applicable only to managed clusters. Skip this step if your management cluster is upgraded to Container Cloud 2.17.0.

    If StackLight in HA mode is enabled and the deleted machine had the StackLight label, perform the following steps:

    1. Connect to the managed cluster as described in steps 5-7 in Connect to a Mirantis Container Cloud cluster.
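
      For example, assuming that you downloaded the managed cluster kubeconfig from the web UI to kubeconfig-managed.yml in the current directory (the file name is an assumption):

      export KUBECONFIG=$(pwd)/kubeconfig-managed.yml
      kubectl get nodes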

    2. Identify the pods in the Pending state:

      kubectl get po -n stacklight | grep Pending
      

      Example of system response:

      opensearch-master-2             0/1       Pending       0       49s
      patroni-12-0                    0/3       Pending       0       51s
      patroni-13-0                    0/3       Pending       0       48s
      prometheus-alertmanager-1       0/1       Pending       0       47s
      prometheus-server-0             0/2       Pending       0       47s
      
    3. Verify that the reason for the pod Pending state is volume node affinity conflict:

      kubectl describe pod <POD_NAME> -n stacklight
      

      Example of system response:

      Events:
        Type     Reason            Age    From               Message
        ----     ------            ----   ----               -------
        Warning  FailedScheduling  6m53s  default-scheduler  0/6 nodes are available:
                                                             3 node(s) didn't match node selector,
                                                             3 node(s) had volume node affinity conflict.
        Warning  FailedScheduling  6m53s  default-scheduler  0/6 nodes are available:
                                                             3 node(s) didn't match node selector,
                                                             3 node(s) had volume node affinity conflict.
      
    4. Obtain the PVC of one of the pods:

      kubectl get pod <POD_NAME> -n stacklight -o=jsonpath='{range .spec.volumes[*]}{.persistentVolumeClaim}{"\n"}{end}'
      

      Example of system response:

      {"claimName":"opensearch-master-opensearch-master-2"}
      
    5. Remove the PVC using the obtained name. For example, for opensearch-master-opensearch-master-2:

      kubectl delete pvc opensearch-master-opensearch-master-2 -n stacklight
      
    6. Delete the pod:

      kubectl delete po <POD_NAME> -n stacklight
      
    7. Verify that a new pod is created and scheduled to the spare node. This may take some time. For example:

      kubectl get po opensearch-master-2 -n stacklight
      NAME                  READY   STATUS    RESTARTS   AGE
      opensearch-master-2   1/1     Running   0          7m1s
      
    8. Repeat the steps above for the remaining pods in the Pending state.
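
    To speed up steps 4-6 above when several StackLight pods are stuck, you can script them. A minimal sketch, assuming that every remaining Pending pod in the stacklight namespace is affected by the volume node affinity conflict (verify this for each pod first, as described in step 3):

      # Sketch only: recreate PVCs and pods for all Pending StackLight pods
      for pod in $(kubectl get po -n stacklight --field-selector=status.phase=Pending \
          -o jsonpath='{.items[*].metadata.name}'); do
        # Delete every PVC that the pod references
        for pvc in $(kubectl get pod "${pod}" -n stacklight \
            -o jsonpath='{range .spec.volumes[*]}{.persistentVolumeClaim.claimName}{"\n"}{end}'); do
          kubectl delete pvc "${pvc}" -n stacklight
        done
        # Delete the pod so that it is recreated and rescheduled
        kubectl delete po "${pod}" -n stacklight
      done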