Delete a cluster machine using CLI¶
Available since MOSK 23.3
This section instructs you on how to scale down an existing management or managed cluster through the Container Cloud API. To delete a machine using the Container Cloud web UI, see Delete a cluster machine using web UI.
Using the Container Cloud API, you can delete a cluster machine using the following methods:
Recommended. Enable the delete field in the providerSpec section of the required Machine object. This method allows aborting the graceful machine deletion before the node is removed from Docker Swarm.
Not recommended. Apply the delete request to the Machine object.
You can control machine deletion steps by following a specific machine deletion policy.
Overview of machine deletion policies¶
The deletion policy of the Machine resource used in the Container Cloud API defines the specific steps that occur before a machine deletion.
The Container Cloud API contains the following types of deletion policies: graceful, unsafe, forced. By default, the graceful deletion policy is used.
You can change the deletion policy before the machine deletion. If the deletion process has already started, you can reduce the deletion policy restrictions in the following order only: graceful > unsafe > forced.
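For example, assuming the policy is exposed as the deletionPolicy field in the providerSpec.value section of the Machine object (verify the exact field name against your Container Cloud API reference), switching a machine to the unsafe policy before deletion may look as follows:
kubectl patch machines.cluster.k8s.io -n <projectName> <machineName> --type=merge -p '{"spec":{"providerSpec":{"value":{"deletionPolicy":"unsafe"}}}}'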
Graceful machine deletion¶
Recommended
During a graceful machine deletion, the provider and LCM controllers perform the following steps:
Cordon and drain the node being deleted.
Remove the node from Docker Swarm.
Send the delete request to the corresponding Machine resource.
Remove the provider resources such as the VM instance, network, volume, and so on. Remove the related Kubernetes resources.
Remove the finalizer from the Machine resource. This step completes the machine deletion from Kubernetes resources.
Caution
You can abort a graceful machine deletion only before the corresponding node is removed from Docker Swarm.
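For example, if you started the deletion by enabling the delete flag as described in this section, reverting the flag before the node is removed from Docker Swarm should abort the deletion. Treat the following command as a sketch and verify the behavior in your environment:
kubectl patch machines.cluster.k8s.io -n <projectName> <machineName> --type=merge -p '{"spec":{"providerSpec":{"value":{"delete":false}}}}'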
During a graceful machine deletion, the Machine object status displays the prepareDeletionPhase field that reflects the current stage of the deletion.
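For example, you can track the phase while the deletion progresses. The exact location of the field within the Machine status may differ between releases, so a plain search through the object output is used here:
kubectl get machines.cluster.k8s.io -n <projectName> <machineName> -o yaml | grep prepareDeletionPhase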
Unsafe machine deletion¶
During an unsafe machine deletion, the provider and LCM controllers perform the following steps:
Send the delete request to the corresponding Machine resource.
Remove the provider resources such as the VM instance, network, volume, and so on. Remove the related Kubernetes resources.
Remove the finalizer from the Machine resource. This step completes the machine deletion from Kubernetes resources.
Unlike the graceful policy, the unsafe policy skips the cordon, drain, and Docker Swarm removal steps.
Forced machine deletion¶
During a forced machine deletion, the provider and LCM controllers perform the following steps:
Send the delete request to the corresponding Machine resource.
Remove the provider resources such as the VM instance, network, volume, and so on. Remove the related Kubernetes resources.
Remove the finalizer from the Machine resource. This step completes the machine deletion from Kubernetes resources.
This policy type allows deleting a Machine resource even if the provider or LCM controller gets stuck at some step. However, this policy may require a manual cleanup of machine resources in case of a controller failure. For details, see Delete a machine from a cluster using CLI.
Caution
Consider the following precautions applied to the forced machine deletion policy:
Use the forced machine deletion only if either graceful or unsafe machine deletion fails.
If the forced machine deletion fails at any step, the LCM controller removes the finalizer anyway.
Before starting the forced machine deletion, back up the related Machine resource:
kubectl get machine -n <projectName> <machineName> -o json > deleted_machine.json
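Optionally, confirm that the backup captured the node reference that the cleanup steps below rely on:
jq -r '.status.nodeRef.name' deleted_machine.json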
Delete a machine from a cluster using CLI¶
Carefully read the machine deletion precautions.
Log in to the host where your management cluster kubeconfig is located and where kubectl is installed.
For the bare metal provider, ensure that the machine being deleted is not a Ceph Monitor. If it is, migrate the Ceph Monitor to keep an odd number of Ceph Monitors in the quorum after the machine deletion. For details, see Container Cloud documentation: Ceph operations - Migrate a Ceph Monitor before machine replacement.
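For example, assuming Ceph is deployed through Rook in the rook-ceph namespace of the managed cluster (the default layout in MOSK), you can check whether a Ceph Monitor pod runs on the node of the machine being deleted. The <managedClusterKubeconfig> and <nodeName> placeholders are illustrative:
kubectl --kubeconfig <managedClusterKubeconfig> -n rook-ceph get pods -l app=rook-ceph-mon -o wide | grep <nodeName>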
If the machine is assigned to a machine pool, decrease the replicas count of the pool as described in Container Cloud documentation: Operate machine pools.
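A hedged sketch of decreasing the pool size, assuming that machine pools are represented by MachinePool objects with a spec.replicas field; verify the resource name and field against the Operate machine pools documentation before using it:
kubectl patch machinepool -n <projectName> <machinePoolName> --type=merge -p '{"spec":{"replicas":2}}'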
Select from the following options:
Recommended. In the providerSpec.value section of the Machine object, set delete to true:
kubectl patch machines.cluster.k8s.io -n <projectName> <machineName> --type=merge -p '{"spec":{"providerSpec":{"value":{"delete":true}}}}'
Replace the parameters enclosed in angle brackets with the corresponding values.
Not recommended. Delete the Machine object:
kubectl delete machines.cluster.k8s.io -n <projectName> <machineName>
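With either option, you can watch the Machine object until it is removed from the API:
kubectl get machines.cluster.k8s.io -n <projectName> <machineName> -w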
After a successful unsafe or graceful machine deletion, the resources allocated to the machine are automatically freed up.
If you applied the forced machine deletion, verify that all machine resources are freed up. Otherwise, manually clean up resources:
Delete the Kubernetes Node object related to the deleted Machine object:
Note
Since MOSK 23.1, skip this step as the system performs it automatically.
Log in to the host where your managed cluster kubeconfig is located.
Verify whether the Node object for the deleted Machine object still exists:
kubectl get node $(jq -r '.status.nodeRef.name' deleted_machine.json)
If the system response is positive:
Log in to the host where your management cluster kubeconfig is located.
Delete the LcmMachine object with the same name and project name as the deleted Machine object:
kubectl delete lcmmachines.lcm.mirantis.com -n <projectName> <machineName>
Clean up the provider resources:
Log in to the host that contains the management cluster kubeconfig and has jq installed.
If the deleted machine was located on the managed cluster, delete the Ceph node as described in High-level workflow of Ceph OSD or node removal.
Obtain the name of the BareMetalHost object that relates to the deleted machine:
BMH=$(jq -r '.metadata.annotations."metal3.io/BareMetalHost" | split("/") | .[1]' deleted_machine.json)
Delete the BareMetalHost credentials:
kubectl delete secret -n <projectName> <machineName>-user-data
Deprovision the related BareMetalHost object:
kubectl patch baremetalhost -n <projectName> ${BMH} --type merge --patch '{"spec": {"image": null, "userData": null, "online": false}}'
kubectl patch baremetalhost -n <projectName> ${BMH} --type merge --patch '{"spec": {"consumerRef": null}}'