This section describes how to add a Ceph OSD node to an existing Ceph cluster.
Warning
Prior to the 2019.2.10 maintenance update, this feature is available as technical preview only.
To add a Ceph OSD node:
Connect the Ceph OSD salt-minion node to salt-master.
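Before changing the cluster model, you can confirm that the new minion is registered and responds. The following is a minimal check, assuming the osd005 hostname used in the examples below:

# On the Salt Master node: list accepted minion keys and ping the new node.
salt-key -L
salt 'osd005*' test.ping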
In your project repository, if the nodes are not generated dynamically, add the following lines to cluster/ceph/init.yml and modify them according to your environment:
_param:
  ceph_osd_node05_hostname: osd005
  ceph_osd_node05_address: 172.16.47.72
  ceph_osd_node05_backend_address: 10.12.100.72
  ceph_osd_node05_public_address: 10.13.100.72
  ceph_osd_node05_deploy_address: 192.168.0.72
  ceph_osd_system_codename: xenial
linux:
  network:
    host:
      osd005:
        address: ${_param:ceph_osd_node05_address}
        names:
        - ${_param:ceph_osd_node05_hostname}
        - ${_param:ceph_osd_node05_hostname}.${_param:cluster_domain}
Note
Skip the ceph_osd_node05_deploy_address parameter if you have DHCP enabled on a PXE network.
If the nodes are not generated dynamically, add the following lines to the cluster/infra/config/nodes.yml file and modify them according to your environment. Otherwise, increase the number of generated OSDs. A check of the generated node definition is sketched after the note below.
parameters:
  reclass:
    storage:
      node:
        ceph_osd_node05:
          name: ${_param:ceph_osd_node05_hostname}
          domain: ${_param:cluster_domain}
          classes:
          - cluster.${_param:cluster_name}.ceph.osd
          params:
            salt_master_host: ${_param:reclass_config_master}
            linux_system_codename: ${_param:ceph_osd_system_codename}
            single_address: ${_param:ceph_osd_node05_address}
            deploy_address: ${_param:ceph_osd_node05_deploy_address}
            backend_address: ${_param:ceph_osd_node05_backend_address}
            ceph_public_address: ${_param:ceph_osd_node05_public_address}
            ceph_crush_parent: rack02
Note
Skip the deploy_address parameter if you have DHCP enabled on a PXE network.
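If you want to confirm that the new node definition is generated before running the pipeline, a check along the following lines can help. This is only a sketch: it assumes the default MCP reclass layout with generated node files under /srv/salt/reclass/nodes/_generated, which may differ in your deployment.

# On the Salt Master node: regenerate the node definitions from the model
# and check that the new Ceph OSD node appears among them.
salt-call state.sls reclass.storage
ls /srv/salt/reclass/nodes/_generated/ | grep osd005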
Skip this step starting from the 2019.2.3 maintenance update. Verify that the cluster/ceph/osd.yml file and the pillar of the new Ceph OSD node do not contain the following lines (a quick pillar check is sketched after the snippet):
parameters:
  ceph:
    osd:
      crush_update: false
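To check the pillar directly, you can query the new node; the command below is a sketch that reuses the osd005* target from the examples above and should return an empty value:

# The new Ceph OSD node must not have crush_update set to false at this point.
salt 'osd005*' pillar.get ceph:osd:crush_update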
Log in to the Jenkins web UI.
Select from the following options:
Note
Prior to the 2019.2.10 maintenance update, the Ceph - add node and Ceph - add osd (upmap) Jenkins pipeline jobs are available as technical preview only.
Caution
A large change in the CRUSH weight distribution after the addition of Ceph OSDs can cause massive unexpected rebalancing, affect performance, and, in some cases, cause data corruption. Therefore, if you are using Ceph - add node, Mirantis recommends that you add all disks with zero weight and reweight them gradually.
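A gradual reweight could look like the following sketch. The OSD ID osd.12 and the target weights are placeholders; wait for the cluster to settle between increments:

# Raise the CRUSH weight of a newly added, zero-weight OSD in small steps.
ceph osd crush reweight osd.12 0.2
ceph -s          # wait for rebalancing to finish before the next step
ceph osd crush reweight osd.12 0.5
ceph -s
ceph osd crush reweight osd.12 1.0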
Specify the following parameters:
Parameter | Description and values
---|---
SALT_MASTER_CREDENTIALS | The Salt Master credentials to use for connection, defaults to salt.
SALT_MASTER_URL | The Salt Master node host URL with the salt-api port, defaults to the jenkins_salt_api_url parameter. For example, http://172.18.170.27:6969.
HOST | Add the Salt target name of the new Ceph OSD. For example, osd005*.
HOST_TYPE (removed since the 2019.2.3 maintenance update) | Add osd as the type of Ceph node that is going to be added.
CLUSTER_FLAGS (added since the 2019.2.7 maintenance update) | Add a comma-separated list of flags to check after the pipeline execution.
USE_UPMAP (added since the 2019.2.13 maintenance update) | Use to facilitate the upmap module during rebalancing to minimize the impact on cluster performance.
Click Deploy.
The Ceph - add node pipeline workflow prior to the 2019.2.3 maintenance update:
1. Applies the reclass state.
2. Applies the linux, openssh, salt, ntp, rsyslog, and ceph.osd states.

The Ceph - add node pipeline workflow starting from the 2019.2.3 maintenance update:
1. Applies the reclass state.
2. Applies the linux, openssh, salt, ntp, and rsyslog states.
3. Enables the upmap mode.
4. Sets the norebalance flag before adding a Ceph OSD.
5. Applies the ceph.osd state on the selected Ceph OSD node.
6. Maps the placement groups with upmap back to the old Ceph OSDs.
7. Unsets the norebalance flag and verifies that the cluster is healthy.

If you use a custom CRUSH map, update the CRUSH map:
Verify the updated /etc/ceph/crushmap file on cmn01. If it is correct, apply the CRUSH map using the following commands (an optional pre-check is sketched after them):
crushtool -c /etc/ceph/crushmap -o /etc/ceph/crushmap.compiled
ceph osd setcrushmap -i /etc/ceph/crushmap.compiled
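As an optional pre-check before running the commands above, you can diff the edited map against the CRUSH map currently installed in the cluster. This is only a sketch; the file names under /tmp are arbitrary:

# Dump and decompile the currently installed CRUSH map, then compare it
# with the edited text map.
ceph osd getcrushmap -o /tmp/crushmap.current
crushtool -d /tmp/crushmap.current -o /tmp/crushmap.current.txt
diff /tmp/crushmap.current.txt /etc/ceph/crushmap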
Add the following lines to the cluster/ceph/osd.yml file:
parameters:
  ceph:
    osd:
      crush_update: false
Apply the ceph.osd state to persist the CRUSH map:
salt -C 'I@ceph:osd' state.sls ceph.osd
Integrate the Ceph OSD nodes with StackLight:
Update the Salt mine:
salt -C 'I@ceph:osd or I@telegraf:remote_agent' state.sls salt.minion.grains
salt -C 'I@ceph:osd or I@telegraf:remote_agent' saltutil.refresh_modules
salt -C 'I@ceph:osd or I@telegraf:remote_agent' mine.update
Wait for one minute.
Apply the following states:
salt -C 'I@ceph:osd or I@telegraf:remote_agent' state.sls telegraf
salt -C 'I@ceph:osd' state.sls fluentd
salt 'mon*' state.sls prometheus
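Once the states are applied, you can confirm from a Ceph Monitor node (for example, cmn01) that the new Ceph OSDs are up and in and that the cluster is healthy; the commands below are a generic check:

# Confirm that the new node and its OSDs are present, up, and in.
ceph osd tree
# Overall cluster health and rebalancing status.
ceph -s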