Create a volume group on top of the defined partition and create the
required number of logical volumes (LVs) on top of the created volume
group (VG). Add one logical volume per Ceph OSD on the node.
Example snippet of an LVM configuration for a Ceph metadata disk:
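A minimal sketch, assuming the volumeGroups and logicalVolumes
stanzas of the BareMetalHostProfile spec; the ceph_meta partition
name is a placeholder for the partition defined above, while the
bluedb VG and meta_N LV names match the /dev paths referenced
later in this section:

  volumeGroups:
    # VG on top of the partition defined for the Ceph metadata disk
    - name: bluedb
      devices:
        - partition: ceph_meta  # placeholder for the defined partition
  logicalVolumes:
    # One LV per Ceph OSD on the node; 4G corresponds to the
    # recommended 4% of a 100 GB Ceph OSD data size (see the Note below)
    - name: meta_1
      vg: bluedb
      size: 4G
    - name: meta_2
      vg: bluedb
      size: 4G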
Warning

Plan the LVs of a separate metadata device thoroughly. Any
logical volume misconfiguration causes redeployment of all
Ceph OSDs that use this disk as a metadata device.
Note
The general Ceph recommendation is to size the metadata device
between 1% and 4% of the Ceph OSD data size. Mirantis highly
recommends allocating at least 4% of the Ceph OSD data size.
If you plan to use a disk as a separate metadata device for 10 Ceph
OSDs, define the size of the LV for each Ceph OSD as 1% to 4% of
the corresponding Ceph OSD data size. If RADOS Gateway is
enabled, the minimum is 4%. For details, see
Ceph documentation: BlueStore config reference.
For example, if the total data size of 10 Ceph OSDs equals 1 TB
with 100 GB per Ceph OSD, assign a metadata disk of at least
10 GB, with 1 GB per LV. The recommended size is 40 GB, with
4 GB per LV.
After applying BareMetalHostProfile, the bare metal provider
creates the LVM partitioning for the metadata disk and exposes
these volumes as /dev paths, for example, /dev/bluedb/meta_1
or /dev/bluedb/meta_3.
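These paths can then be referenced per Ceph OSD in the Ceph
cluster specification. A minimal sketch, assuming the
storageDevices format of the KaaSCephCluster spec; the machine
name and the sdb data device are hypothetical:

  nodes:
    worker-1:  # hypothetical machine name
      storageDevices:
        - name: sdb  # hypothetical data device of this Ceph OSD
          config:
            deviceClass: hdd
            # LV created by the host profile above
            metadataDevice: /dev/bluedb/meta_1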
Example template of a host profile configuration for Ceph