Note
This feature is available starting from the MCP 2019.2.7 maintenance update. Before using the feature, follow the steps described in Apply maintenance updates.
Warning
Prior to the 2019.2.10 maintenance update, ceph-volume is available as a technical preview only.
Starting from the 2019.2.10 maintenance update, the ceph-volume tool is fully supported and must be enabled prior to upgrading from Ceph Luminous to Nautilus. The ceph-disk tool is deprecated.
This section describes how to enable the ceph-volume command-line tool, which deploys and inspects Ceph OSDs using the Logical Volume Management (LVM) functionality for provisioning block devices. The main difference between ceph-disk and ceph-volume is that ceph-volume does not automatically partition the disks used for block.db. However, partitioning is performed within the procedure below.
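For context only: ceph-volume consumes block devices or logical volumes that already exist, which is why the procedure below defines the block.db partitions and data logical volumes in the Reclass model beforehand. A rough manual equivalent for a single BlueStore OSD with a separate block.db could look like the following; the volume group, logical volume, and partition names are illustrative and are not part of this procedure:
ceph-volume lvm create --bluestore --data ceph/osd01 --block.db /dev/vdd1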
To enable the ceph-volume tool:
Open your Git project repository with the Reclass model on the cluster level.
If you are upgrading from Ceph Luminous to Nautilus, specify the legacy_disks pillar in classes/cluster/<cluster_name>/ceph/osd.yml to allow the operation of both ceph-disk- and ceph-volume-deployed OSDs:
parameters:
  ceph:
    osd:
      legacy_disks:
        0:
          class: hdd
          weight: 0.048691
          dev: /dev/vdc
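The entries under legacy_disks describe OSDs that were deployed with ceph-disk, keyed by OSD ID (osd.0 on /dev/vdc in this example). If you need to confirm the current class and weight of an existing OSD, one option is to read them from the CRUSH map on a node that has the Ceph admin keyring, for example:
ceph osd tree    # the CLASS and WEIGHT columns correspond to the class and weight pillar values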
In classes/cluster/<cluster_name>/ceph/osd.yml, define the partitions and logical volumes to use as OSD devices:
parameters:
  linux:
    storage:
      disk:
        osd_blockdev:
          startsector: 1
          name: /dev/vdd
          type: gpt
          partitions:
            - size: 10240
            - size: 10240
      lvm:
        ceph:
          enabled: true
          devices:
            - /dev/vdc
          volume:
            osd01:
              size: 15G
            osd02:
              size: 15G
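In this example, /dev/vdd is labeled with GPT and split into two partitions intended for block.db, while /dev/vdc becomes an LVM physical volume whose volume group (named after the ceph key, following the linux formula convention) carries the osd01 and osd02 logical volumes for OSD data. After the linux.storage state has been applied to the OSD node, typically as part of re-adding the node, you can sanity-check the resulting layout with standard tools; the device and volume group names below assume the example configuration above:
lsblk /dev/vdc /dev/vdd    # verify the GPT partitions and the LVM layout
lvs ceph                   # list the osd01 and osd02 logical volumes in the ceph volume group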
In classes/cluster/<cluster_name>/ceph/osd.yml, set the lvm_enabled parameter to True:
parameters:
  ceph:
    osd:
      lvm_enabled: True
Apply the changes:
salt -C 'I@ceph:osd' saltutil.refresh_pillar
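Optionally, verify that the refreshed pillar is visible on the OSD nodes. This check is not part of the official procedure; it only confirms that the new parameter is present:
salt -C 'I@ceph:osd' pillar.get ceph:osd:lvm_enabled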
Remove the OSD nodes as described in Remove a Ceph OSD node.
Add new OSD nodes as described in Add a Ceph OSD node.
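After the OSD nodes have been re-added, you can confirm that the new OSDs are backed by logical volumes rather than ceph-disk partitions. As a minimal check, run the first command directly on an OSD node and the second on a node with the Ceph admin keyring:
ceph-volume lvm list    # lists OSDs deployed through LVM, including their block and block.db devices
ceph osd tree           # confirms that the re-added OSDs are up and present in the CRUSH map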