Delete a machine

This section describes how to scale down an existing managed cluster by deleting a machine through the Mirantis Container Cloud web UI.

Warning

An operational managed cluster must contain a minimum of 3 Kubernetes manager nodes and 2 Kubernetes worker nodes. The cluster deployment does not start until the minimum number of nodes is created.

To maintain the etcd quorum and prevent deployment failure, deletion of manager nodes is prohibited.

A machine with the manager node role is deleted automatically when the managed cluster itself is deleted.

Warning

If StackLight in HA mode is enabled and you are going to delete a machine with the StackLight label:

  • Make sure that at least 3 machines with the StackLight label remain after the deletion. Otherwise, add a machine with this label before the deletion; you can verify the current number of labeled machines as shown in the example after this list. After the deletion, perform the additional steps described below.

  • Do not delete more than 1 machine with the StackLight label. Since StackLight in HA mode uses local volumes bound to machines, the data from these volumes on the deleted machine is purged, but its replicas remain on the other machines. Removing more than 1 such machine can cause data loss.
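
For example, to verify how many machines currently carry the StackLight label, you can list the labeled nodes on the managed cluster. This is a minimal sketch; the stacklight=enabled node label used below is an assumption about the default StackLight node labeling:

  # List the nodes that carry the StackLight node label (assumed here to be stacklight=enabled)
  kubectl get nodes -l stacklight=enabled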

To delete a machine from a managed cluster:

  1. Log in to the Container Cloud web UI with the writer permissions.

  2. Switch to the required project using the Switch Project action icon located on top of the main left-side navigation panel.

  3. In the Clusters tab, click the required cluster name to open the list of machines running on it.

  4. Click the More action icon in the last column of the machine you want to delete and select Delete. Confirm the deletion.

    Deleting a machine automatically frees up the resources allocated to this machine.

  5. If StackLight in HA mode is enabled and the deleted machine had the StackLight label, perform the following steps:

    1. Connect to the managed cluster as described in steps 5-7 of Connect to a Mirantis Container Cloud cluster.
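
      For example, a minimal sketch assuming that you have already downloaded the kubeconfig of the managed cluster to your local machine:

      # Point kubectl at the managed cluster and verify connectivity
      export KUBECONFIG=<PATH_TO_MANAGED_CLUSTER_KUBECONFIG>
      kubectl get nodes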

    2. Identify the pods in the Pending state:

      kubectl get po -n stacklight | grep Pending
      

      Example of system response:

      elasticsearch-master-2          0/1       Pending       0       49s
      patroni-12-0                    0/3       Pending       0       51s
      patroni-13-0                    0/3       Pending       0       48s
      prometheus-alertmanager-1       0/1       Pending       0       47s
      prometheus-server-0             0/2       Pending       0       47s
      
    3. Verify that the reason for the pod Pending state is a volume node affinity conflict:

      kubectl describe pod <POD_NAME> -n stacklight
      

      Example of system response:

      Events:
        Type     Reason            Age    From               Message
        ----     ------            ----   ----               -------
        Warning  FailedScheduling  6m53s  default-scheduler  0/6 nodes are available:
                                                             3 node(s) didn't match node selector,
                                                             3 node(s) had volume node affinity conflict.
        Warning  FailedScheduling  6m53s  default-scheduler  0/6 nodes are available:
                                                             3 node(s) didn't match node selector,
                                                             3 node(s) had volume node affinity conflict.
      
    4. Obtain the PVC of one of the pods:

      kubectl get pod <POD_NAME> -n stacklight -o=jsonpath='{range .spec.volumes[*]}{.persistentVolumeClaim}{"\n"}{end}'
      

      Example of system response:

      {"claimName":"elasticsearch-master-elasticsearch-master-2"}
      
    5. Remove the PVC using the obtained name. For example, for elasticsearch-master-elasticsearch-master-2:

      kubectl delete pvc elasticsearch-master-elasticsearch-master-2 -n stacklight
      
    6. Delete the pod:

      kubectl delete po <POD_NAME> -n stacklight
      
    7. Verify that a new pod is created and scheduled to the spare node. This may take some time. For example:

      kubectl get po elasticsearch-master-2 -n stacklight
      NAME                     READY   STATUS   RESTARTS   AGE
      elasticsearch-master-2   1/1     Running  0          7m1s
      
    8. Repeat the steps above for the remaining pods in the Pending state.
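
    The per-pod cleanup in steps 4-6 can also be scripted for all remaining Pending pods. The following is a minimal sketch that assumes every Pending pod in the stacklight namespace has already been confirmed to be blocked by a volume node affinity conflict, as described in step 3:

      # Iterate over all Pending pods in the stacklight namespace,
      # delete the PVCs they reference, then delete the pods so that
      # they are rescheduled to the spare node.
      for pod in $(kubectl get po -n stacklight --field-selector=status.phase=Pending -o name); do
        for pvc in $(kubectl get "$pod" -n stacklight -o jsonpath='{range .spec.volumes[*]}{.persistentVolumeClaim.claimName}{"\n"}{end}'); do
          kubectl delete pvc "$pvc" -n stacklight
        done
        kubectl delete "$pod" -n stacklight
      done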