Configure a separate metadata device for Ceph OSD
If you plan to use a separate metadata device for Ceph OSD, configure the
spec.devices section of the BareMetalHostProfile object as described
below.
Important
Mirantis highly recommends configuring disk partitions for
Ceph OSD metadata using BareMetalHostProfile.
To configure a separate metadata device for Ceph OSD:
1. Add the device to spec.devices with a single partition that will use
   the entire disk size. For example, if you plan to use four Ceph OSDs
   with a separate metadata device for each Ceph OSD, configure the
   spec.devices section as follows:

   spec:
     devices:
       ...
       - device:
           byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:5
           wipe: true
         partitions:
           - name: ceph_meta
             size: 0
             wipe: true
2. Create a volume group on top of the defined partition and create the
   required number of logical volumes (LVs) on top of the created volume
   group (VG). Add one LV per Ceph OSD on the node.

   Example snippet of an LVM configuration for a Ceph metadata disk:

   spec:
     ...
     volumeGroups:
       ...
       - devices:
           - partition: ceph_meta
         name: bluedb
     logicalVolumes:
       ...
       - name: meta_1
         size: 25%VG
         vg: bluedb
       - name: meta_2
         size: 25%VG
         vg: bluedb
       - name: meta_3
         size: 25%VG
         vg: bluedb
       - name: meta_4
         size: 25%VG
         vg: bluedb
Important
Plan the LVs of a separate metadata device thoroughly. Any logical volume misconfiguration causes redeployment of all Ceph OSDs that use this disk as a metadata device.
Note
The general Ceph recommendation is to size a metadata device between 1% and 4% of the Ceph OSD data size. Mirantis highly recommends allocating at least 4% of the Ceph OSD data size.
If you plan to use a disk as a separate metadata device for 10 Ceph OSDs, define the size of the LV for each Ceph OSD as 1% to 4% of the corresponding Ceph OSD data size. If RADOS Gateway is enabled, the minimum size is 4%. For details, see Ceph documentation: BlueStore config reference.
For example, if the total data size of 10 Ceph OSDs equals 1 TB with 100 GB each, assign a metadata disk of no less than 10 GB with 1 GB per each LV. The recommended size is 40 GB with 4 GB per each LV.

After applying BareMetalHostProfile, the bare-metal provider creates the LVM partitioning for the metadata disk and exposes the volumes as /dev paths, for example, /dev/bluedb/meta_1 or /dev/bluedb/meta_3.
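To verify the result on a provisioned node, you can use standard LVM tools. This is an optional, generic check rather than a step of this procedure; the VG and LV names match the example above:

   # List the LVs created in the bluedb volume group
   lvs bluedb
   # Confirm the corresponding /dev paths exist
   ls -l /dev/bluedb/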
Example template of a host profile configuration for Ceph:
spec:
  ...
  devices:
    ...
    - device:
        byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:1
        wipe: true
    - device:
        byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:2
        wipe: true
    - device:
        byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:3
        wipe: true
    - device:
        byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:4
        wipe: true
    - device:
        byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:5
        wipe: true
      partitions:
        - name: ceph_meta
          size: 0
          wipe: true
  volumeGroups:
    ...
    - devices:
        - partition: ceph_meta
      name: bluedb
  logicalVolumes:
    ...
    - name: meta_1
      size: 25%VG
      vg: bluedb
    - name: meta_2
      size: 25%VG
      vg: bluedb
    - name: meta_3
      size: 25%VG
      vg: bluedb
    - name: meta_4
      size: 25%VG
      vg: bluedb
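A BareMetalHostProfile object is a regular Kubernetes resource, so applying it could look as follows. This is a generic sketch; the file name and namespace are placeholders for your environment:

   # Apply the host profile in the management cluster
   kubectl apply -f bare-metal-host-profile.yaml -n <project-namespace>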
After applying such a BareMetalHostProfile object to a node, the nodes
spec of the MiraCeph object contains the following devices
section:
spec:
  ...
  nodes:
    - name: storage-worker-1
      ...
      devices:
        - config:
            deviceClass: hdd
            metadataDevice: /dev/bluedb/meta_1
          fullPath: /dev/disk/by-id/scsi-0ATA_ST4000NM002A-2HZ101_WS20NEGE
        - config:
            deviceClass: hdd
            metadataDevice: /dev/bluedb/meta_2
          fullPath: /dev/disk/by-id/scsi-0ATA_ST4000NM002A-2HZ_WS20LEL3
        - config:
            deviceClass: hdd
            metadataDevice: /dev/bluedb/meta_3
          fullPath: /dev/disk/by-id/ata-HGST_HUS724040ALA640_PN1334PEDN9SSU
        - config:
            deviceClass: hdd
            metadataDevice: /dev/bluedb/meta_4
          fullPath: /dev/disk/by-id/scsi-0ATA_HGST_HUS724040ALA640_PN1334PEDN9SSU
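Once the Ceph OSDs are deployed, you can cross-check that each OSD uses its dedicated metadata LV. As an illustration only, the standard ceph osd metadata command prints the BlueStore DB device details for a given OSD ID:

   # Show the BlueStore DB (metadata) device fields for OSD 0
   ceph osd metadata 0 | grep bluefs_db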