Remove a Ceph OSD node

This section describes how to remove a Ceph OSD node from a Ceph cluster.

To remove a Ceph OSD node:

  1. If the host is explicitly defined in the model, perform the following steps. Otherwise, proceed to step 2.

    1. In your project repository, remove the following lines from the cluster/ceph/init.yml file or from the pillar, depending on your environment:

      _param:
        ceph_osd_node05_hostname: osd005
        ceph_osd_node05_address: 172.16.47.72
        ceph_osd_system_codename: xenial
      linux:
        network:
          host:
            osd005:
              address: ${_param:ceph_osd_node05_address}
              names:
              - ${_param:ceph_osd_node05_hostname}
              - ${_param:ceph_osd_node05_hostname}.${_param:cluster_domain}
      
    2. Remove the following lines from the cluster/infra/config/init.yml file or from the pillar, depending on your environment. After removing both definitions, you can refresh the Salt metadata as shown in the sketch after this step.

      parameters:
        reclass:
          storage:
            node:
              ceph_osd_node05:
                name: ${_param:ceph_osd_node05_hostname}
                domain: ${_param:cluster_domain}
                classes:
                - cluster.${_param:cluster_name}.ceph.osd
                params:
                  salt_master_host: ${_param:reclass_config_master}
                  linux_system_codename: ${_param:ceph_osd_system_codename}
                  single_address: ${_param:ceph_osd_node05_address}
                  ceph_crush_parent: rack02
      
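    After updating the model, you can optionally regenerate the node definitions and refresh the pillar so that the removed host no longer appears in the metadata. A minimal sketch, assuming the changes are already merged on the Salt Master node and that the reclass.storage state is available there, as in a typical MCP deployment:

      # Regenerate the Reclass node definitions from the updated model (run on the Salt Master)
      salt-call state.sls reclass.storage
      # Refresh the pillar data on all minions
      salt '*' saltutil.refresh_pillar
      # Verify that the removed host (osd005 in the example above) is no longer defined in the pillar
      salt 'cmn01*' pillar.get linux:network:host:osd005
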
  2. Log in to the Jenkins web UI.

  3. Open the Ceph - remove node pipeline.

  4. Specify the following parameters:

    SALT_MASTER_CREDENTIALS
      The Salt Master credentials to use for connection. Defaults to salt.
    SALT_MASTER_URL
      The Salt Master node host URL with the salt-api port. Defaults to the jenkins_salt_api_url parameter. For example, http://172.18.170.27:6969.
    HOST
      The Salt target name of the Ceph OSD node to remove. For example, osd005*. You can confirm the target before running the pipeline, as shown in the sketch after this table.
    HOST_TYPE (removed since the 2019.2.13 maintenance update)
      Add osd as the type of Ceph node that is going to be removed.
    OSD (added since the 2019.2.13 maintenance update)
      The list of Ceph OSDs to remove while keeping the remaining OSDs and the node itself as part of the cluster. To remove all OSDs, leave the field empty or set it to *.
    GENERATE_CRUSHMAP
      Select if the CRUSH map file should be updated. The updated CRUSH map must be applied manually unless it is specifically set to be enforced in the pillar.
    ADMIN_HOST (removed since the 2019.2.13 maintenance update)
      Add cmn01* as the Ceph cluster node with the admin keyring.
    WAIT_FOR_HEALTHY (mandatory since the 2019.2.13 maintenance update)
      Verify that this parameter is selected, as it enables the Ceph health check within the pipeline.
    CLEANDISK (added since the 2019.2.10 update, mandatory since the 2019.2.13 maintenance update)
      Select to clean the data or block partitions.
    CLEAN_ORPHANS (added since the 2019.2.13 maintenance update)
      Select to clean the orphaned disks of Ceph OSDs that are no longer part of the cluster.
    FAST_WIPE (added since the 2019.2.13 maintenance update)
      Deselect if the entire disk needs zero filling.
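
    Before you click Deploy, you can optionally confirm that the HOST target resolves to the intended node and review which OSDs it hosts. A minimal sketch, run from the Salt Master node and assuming the osd005* target and the cmn01 admin node from the examples above:

      # Confirm that the Salt target matches the node you intend to remove
      salt 'osd005*' test.ping
      # List the host buckets and OSD IDs as seen by Ceph (run via the node with the admin keyring)
      salt 'cmn01*' cmd.run 'ceph osd tree'
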
  5. Click Deploy.

    The Ceph - remove node pipeline workflow:

    1. Mark all Ceph OSDs running on the specified HOST as out. If you selected the WAIT_FOR_HEALTHY parameter, Jenkins pauses the execution of the pipeline until the data migrates to a different Ceph OSD. You can monitor the migration as shown in the sketch after this list.
    2. Stop all Ceph OSD services running on the specified HOST.
    3. Remove all Ceph OSDs running on the specified HOST from the CRUSH map.
    4. Remove the authentication keys of all Ceph OSDs running on the specified HOST.
    5. Remove all Ceph OSDs running on the specified HOST from the Ceph cluster.
    6. Purge Ceph packages from the specified HOST.
    7. Stop the Salt Minion service on the specified HOST.
    8. Remove the Salt Minion node ID from salt-key on the Salt Master node.
    9. Update the CRUSH map file on the I@ceph:setup:crush node if GENERATE_CRUSHMAP was selected. You must manually apply the update unless it is specified otherwise in the pillar.
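
    While the pipeline is paused on the WAIT_FOR_HEALTHY check, you can monitor the data migration from a node that holds the admin keyring, for example cmn01. A minimal sketch; these are standard Ceph CLI commands and are not part of the pipeline itself:

      # Overall cluster status, including recovery and rebalancing progress
      ceph -s
      # Per-OSD utilization, useful to confirm that data is draining from the OSDs marked out
      ceph osd df
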
  6. If you selected GENERATE_CRUSHMAP, check the updated /etc/ceph/crushmap file on cmn01. If it is correct, apply the CRUSH map (see the verification sketch after the commands):

    crushtool -c /etc/ceph/crushmap -o /etc/ceph/crushmap.compiled
    ceph osd setcrushmap -i /etc/ceph/crushmap.compiled
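
    After the CRUSH map is applied, you can optionally verify that the removed node and its OSDs are gone and that the cluster returns to a healthy state. A minimal sketch, run from a node with the admin keyring such as cmn01 and assuming osd005 was the removed host:

      # The osd005 host bucket and its OSDs should no longer appear in the tree
      ceph osd tree
      # The cluster should report HEALTH_OK once rebalancing completes
      ceph -s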