This section describes how to add a Ceph OSD node to an existing Ceph cluster.
Warning
Prior to the 2019.2.10 maintenance update, this feature is available as technical preview only.
To add a Ceph OSD node:
Connect the Ceph OSD salt-minion node to salt-master.
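For example, you can confirm the connection from the Salt Master node. This is a sketch; the osd005 minion ID is an assumption based on the host name used in this section:

salt-key -L | grep osd005                # the new minion key must be listed
salt-key -a osd005.<cluster_domain>      # accept the key if it is still pending
salt 'osd005*' test.ping                 # verify that the minion responds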
In your project repository, if the nodes are not generated dynamically, add the following lines to cluster/ceph/init.yml and modify them according to your environment:
_param:
  ceph_osd_node05_hostname: osd005
  ceph_osd_node05_address: 172.16.47.72
  ceph_osd_system_codename: xenial
linux:
  network:
    host:
      osd005:
        address: ${_param:ceph_osd_node05_address}
        names:
        - ${_param:ceph_osd_node05_hostname}
        - ${_param:ceph_osd_node05_hostname}.${_param:cluster_domain}
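To propagate the new host entry to /etc/hosts on the existing nodes, you can apply the corresponding state cluster-wide. A minimal sketch, assuming the linux formula manages the hosts file in your deployment:

salt '*' state.sls linux.network.host    # render the new host entry on all nodes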
If the nodes are not generated dynamically, add the following lines to cluster/infra/config/init.yml and modify them according to your environment. Otherwise, increase the number of generated OSDs.
parameters:
  reclass:
    storage:
      node:
        ceph_osd_node05:
          name: ${_param:ceph_osd_node05_hostname}
          domain: ${_param:cluster_domain}
          classes:
          - cluster.${_param:cluster_name}.ceph.osd
          params:
            salt_master_host: ${_param:reclass_config_master}
            linux_system_codename: ${_param:ceph_osd_system_codename}
            single_address: ${_param:ceph_osd_node05_address}
            ceph_crush_parent: rack02
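After you define the node statically, regenerate the node definitions on the Salt Master node. A sketch, assuming a standard MCP Salt Master configuration:

salt-call state.sls reclass.storage      # regenerate the node definitions
salt '*' saltutil.refresh_pillar         # refresh pillar data on all minions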
Skip this step starting from the 2019.2.3 maintenance update.
Verify that the cluster/ceph/osd.yml file and the pillar of the new Ceph OSD do not contain the following lines:
parameters:
  ceph:
    osd:
      crush_update: false
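To inspect the pillar of the new node directly, you can run the following command from the Salt Master node (the osd005 target is an assumption matching the host name above); an empty result means the parameter is not set:

salt 'osd005*' pillar.get ceph:osd:crush_update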
Log in to the Jenkins web UI.
Select from the following options:
For MCP versions starting from the 2019.2.10 maintenance update, open the Ceph - add osd (upmap) pipeline.
For MCP versions prior to the 2019.2.10 maintenance update, open the Ceph - add node pipeline.
Note
Prior to the 2019.2.10 maintenance update, the Ceph - add node and Ceph - add osd (upmap) Jenkins pipeline jobs are available as technical preview only.
Caution
A large change in the CRUSH weight distribution after the addition of Ceph OSDs can cause massive unexpected rebalancing, affect performance, and in some cases cause data corruption. Therefore, if you are using Ceph - add node, Mirantis recommends that you add all disks with zero weight and reweight them gradually.
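The gradual reweighting itself uses standard Ceph commands. A hedged sketch; the OSD ID and weight values are illustrative only:

ceph osd crush reweight osd.12 0.2    # raise the CRUSH weight in a small step
ceph -s                               # wait for HEALTH_OK before the next step
ceph osd crush reweight osd.12 0.4    # repeat until the target weight is reached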
Specify the following parameters:
Parameter | Description and values
---|---
SALT_MASTER_CREDENTIALS | The Salt Master credentials to use for connection, defaults to
SALT_MASTER_URL | The Salt Master node host URL with the
HOST | Add the Salt target name of the new Ceph OSD. For example,
HOST_TYPE (removed since the 2019.2.3 update) | Add osd as the type of Ceph node that is going to be added.
CLUSTER_FLAGS (added since the 2019.2.7 update) | Add a comma-separated list of flags to check after the pipeline execution.
Click Deploy.
The Ceph - add node pipeline workflow prior to the 2019.2.3 maintenance update:
Apply the reclass state.
Apply the linux, openssh, salt, ntp, rsyslog, and ceph.osd states.
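For reference, these stages correspond roughly to the following manual Salt commands. This is a sketch, not the pipeline source; the osd005 target is an assumption:

salt -C 'I@salt:master' state.sls reclass                  # refresh the model on the Salt Master
salt 'osd005*' state.sls linux,openssh,salt,ntp,rsyslog    # base states on the new node
salt 'osd005*' state.sls ceph.osd                          # deploy the Ceph OSD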
The Ceph - add node pipeline workflow starting from the 2019.2.3 maintenance update:
Apply the reclass state.
Verify that all installed Ceph clients have the Luminous version.
Apply the linux, openssh, salt, ntp, and rsyslog states.
Set the Ceph cluster compatibility to Luminous.
Switch the balancer module to the upmap mode.
Set the norebalance flag before adding a Ceph OSD.
Apply the ceph.osd state on the selected Ceph OSD node.
Update the mappings for the remapped placement groups (PGs) using upmap back to the old Ceph OSDs.
Unset the norebalance flag and verify that the cluster is healthy.
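Several of these stages map to standard Ceph commands, which can help when you monitor the pipeline or recover from a failed run. A sketch, not the pipeline source:

ceph osd set-require-min-compat-client luminous    # set cluster compatibility to Luminous
ceph balancer mode upmap                           # switch the balancer to upmap mode
ceph osd set norebalance                           # before the ceph.osd state runs
ceph osd unset norebalance                         # after the PG mappings are updated
ceph -s                                            # verify that the cluster is healthy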
If you use a custom CRUSH map, update the CRUSH map:
Verify the updated /etc/ceph/crushmap file on cmn01. If correct, apply the CRUSH map using the following commands:
crushtool -c /etc/ceph/crushmap -o /etc/ceph/crushmap.compiled
ceph osd setcrushmap -i /etc/ceph/crushmap.compiled
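To double-check the result, you can dump the active CRUSH map and compare it with your source file. A minimal sketch; the /tmp paths are illustrative, and minor formatting differences between the decompiled map and your source are expected:

ceph osd getcrushmap -o /tmp/crushmap.active             # dump the active CRUSH map
crushtool -d /tmp/crushmap.active -o /tmp/crushmap.txt   # decompile it to text
diff /tmp/crushmap.txt /etc/ceph/crushmap                # review the differences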
Add the following lines to the cluster/ceph/osd.yml file:
parameters:
  ceph:
    osd:
      crush_update: false
Apply the ceph.osd state to persist the CRUSH map:
salt -C 'I@ceph:osd' state.sls ceph.osd
Integrate the Ceph OSD nodes with StackLight:
Update the Salt mine:
salt -C 'I@ceph:osd or I@telegraf:remote_agent' state.sls salt.minion.grains
salt -C 'I@ceph:osd or I@telegraf:remote_agent' saltutil.refresh_modules
salt -C 'I@ceph:osd or I@telegraf:remote_agent' mine.update
Wait for one minute.
Apply the following states:
salt -C 'I@ceph:osd or I@telegraf:remote_agent' state.sls telegraf
salt -C 'I@ceph:osd' state.sls fluentd
salt 'mon*' state.sls prometheus
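To confirm the integration, you can verify that the metrics agent is running on the new node. A sketch; the service name may differ in your deployment:

salt -C 'I@ceph:osd' service.status telegraf    # should return True for the new node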