This section describes how to remove a Ceph OSD node from a Ceph cluster.
To remove a Ceph OSD node:
1. If the host is explicitly defined in the model, perform the following steps. Otherwise, proceed to step 2.
   1. In your project repository, remove the following lines from the cluster/ceph/init.yml file or from the pillar based on your environment:
_param:
  ceph_osd_node05_hostname: osd005
  ceph_osd_node05_address: 172.16.47.72
  ceph_osd_system_codename: xenial
linux:
  network:
    host:
      osd005:
        address: ${_param:ceph_osd_node05_address}
        names:
        - ${_param:ceph_osd_node05_hostname}
        - ${_param:ceph_osd_node05_hostname}.${_param:cluster_domain}
   2. Remove the following lines from the cluster/infra/config/init.yml file or from the pillar based on your environment:
parameters:
  reclass:
    storage:
      node:
        ceph_osd_node05:
          name: ${_param:ceph_osd_node05_hostname}
          domain: ${_param:cluster_domain}
          classes:
          - cluster.${_param:cluster_name}.ceph.osd
          params:
            salt_master_host: ${_param:reclass_config_master}
            linux_system_codename: ${_param:ceph_osd_system_codename}
            single_address: ${_param:ceph_osd_node05_address}
            ceph_crush_parent: rack02
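After both files are changed, the updated model still has to be propagated to the Salt Master and the remaining minions. The following is a minimal, non-authoritative sketch assuming a standard MCP layout (Salt Master on cfg01, reclass-salt available); adjust names to your environment:

# on the Salt Master, regenerate the node definitions from the updated model
salt-call state.sls reclass.storage
# refresh the pillar on the remaining minions
salt '*' saltutil.refresh_pillar
# sanity check: this should print nothing once the host is gone from the model
reclass-salt --top | grep osd005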
2. Log in to the Jenkins web UI.
3. Open the Ceph - remove node pipeline.
4. Specify the following parameters (see the pre-deployment check after the table):
| Parameter | Description and values |
|---|---|
| SALT_MASTER_CREDENTIALS | The Salt Master credentials to use for the connection. Defaults to salt. |
| SALT_MASTER_URL | The Salt Master node host URL with the salt-api port. Defaults to the jenkins_salt_api_url parameter. For example, http://172.18.170.27:6969. |
| HOST | Add the Salt target name of the Ceph OSD node to remove. For example, osd005*. |
| HOST_TYPE (removed since the 2019.2.13 update) | Add osd as the type of Ceph node that is going to be removed. |
| OSD (added since the 2019.2.13 update) | Specify the list of Ceph OSDs to remove while keeping the rest and the entire node as part of the cluster. To remove all, leave empty or set to *. |
| GENERATE_CRUSHMAP | Select if the CRUSH map file should be updated. Enforcement has to happen manually unless it is specifically set to be enforced in the pillar. |
| ADMIN_HOST (removed since the 2019.2.13 update) | Add cmn01* as the Ceph cluster node with the admin keyring. |
| WAIT_FOR_HEALTHY | Mandatory since the 2019.2.13 maintenance update. Verify that this parameter is selected, as it enables the Ceph health check within the pipeline. |
| CLEANDISK (added since the 2019.2.10 update) | Mandatory since the 2019.2.13 maintenance update. Select to clean the data or block partitions. |
| CLEAN_ORPHANS (added since the 2019.2.13 update) | Select to clean orphaned disks of Ceph OSDs that are no longer part of the cluster. |
| FAST_WIPE (added since the 2019.2.13 update) | Deselect if the entire disk needs zero filling. |
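Before you deploy, it can help to verify that the target and the OSD list match your expectations. The following is a minimal sketch using standard Salt and Ceph commands; osd005 is the example host name from this section, and the ceph commands assume a node that holds the admin keyring, such as cmn01:

# confirm the Salt target resolves to exactly the node you intend to remove
salt 'osd005*' test.ping
# list the OSDs placed under this host in the CRUSH hierarchy
ceph osd crush ls osd005
# cross-check the OSD IDs against the full tree
ceph osd tree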
5. Click Deploy.
The Ceph - remove node pipeline workflow:

1. Mark the Ceph OSDs running on HOST as out. If you selected the WAIT_FOR_HEALTHY parameter, Jenkins pauses the execution of the pipeline until the data migrates to a different Ceph OSD.
2. Remove the Ceph OSDs running on HOST from the CRUSH map.
3. Remove the Ceph OSDs running on HOST from the Ceph cluster.
4. Stop the Ceph OSD services running on HOST.
5. Stop the Salt Minion node on HOST.
6. Remove the Ceph OSD node HOST from the Ceph cluster.

If you selected GENERATE_CRUSHMAP, check the updated /etc/ceph/crushmap file on cmn01. If it is correct, apply the CRUSH map:
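# compile the edited CRUSH map text into a binary map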
crushtool -c /etc/ceph/crushmap -o /etc/ceph/crushmap.compiled
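# load the compiled map into the running cluster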
ceph osd setcrushmap -i /etc/ceph/crushmap.compiled
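After the CRUSH map is applied, you can confirm that the node is gone and that the cluster recovers. These are standard Ceph status commands, run from a node with the admin keyring:

# the removed host and its OSDs should no longer appear in the CRUSH tree
ceph osd tree
# wait for rebalancing to finish; the cluster should report HEALTH_OK
ceph -s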