Configure Ceph disks in a host profile
This section describes how to configure devices for the Ceph cluster in the BareMetalHostProfile object of a managed cluster.
To configure disks for a Ceph cluster:
1. Open the BareMetalHostProfile object of a managed cluster for editing.

2. In the spec.devices section, add each disk intended for use as a Ceph OSD data device with size: 0 and wipe: true.

Example configuration for the sde through sdh disks to use as Ceph OSDs:

  spec:
    devices:
      ...
      - device:
          byName: /dev/sde
          size: 0
          wipe: true
      - device:
          byName: /dev/sdf
          size: 0
          wipe: true
      - device:
          byName: /dev/sdg
          size: 0
          wipe: true
      - device:
          byName: /dev/sdh
          size: 0
          wipe: true
3. Since Container Cloud 2.24.0, if you plan to use a separate metadata device for Ceph OSD, configure the spec.devices section as described below.

Important

Mirantis highly recommends configuring disk partitions for Ceph OSD metadata using BareMetalHostProfile.

Configuration of a separate metadata device for Ceph OSD
1. Add the device to spec.devices with a single partition that uses the entire disk size.

For example, if you plan to use four Ceph OSDs with a separate metadata device for each Ceph OSD, configure the spec.devices section as follows:

  spec:
    devices:
      ...
      - device:
          byName: /dev/sdi
          wipe: true
        partitions:
          - name: ceph_meta
            size: 0
            wipe: true
2. Create a volume group on top of the defined partition and create the required number of logical volumes (LVs) on top of the created volume group (VG). Add one logical volume per Ceph OSD on the node.

Example snippet of an LVM configuration for a Ceph metadata disk (an illustrative generator sketch follows the example):

  spec:
    ...
    volumeGroups:
      ...
      - devices:
          - partition: ceph_meta
        name: bluedb
    logicalVolumes:
      ...
      - name: meta_1
        size: 25%VG
        vg: bluedb
      - name: meta_2
        size: 25%VG
        vg: bluedb
      - name: meta_3
        size: 25%VG
        vg: bluedb
      - name: meta_4
        size: 25%VG
        vg: bluedb
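The 25%VG value above simply splits the volume group evenly across the four Ceph OSDs of the example. As an illustration only, not part of the product tooling, the following Python sketch shows how such logicalVolumes entries generalize to a different number of Ceph OSDs; the helper name logical_volumes_yaml, the even-split policy, and the vg_name and meta_N names mirror the example and are assumptions, not requirements:

  # Illustrative sketch only: print logicalVolumes entries that split one
  # metadata volume group evenly across N Ceph OSDs (names mirror the example).
  def logical_volumes_yaml(osd_count: int, vg_name: str = "bluedb") -> str:
      share = 100 // osd_count  # equal integer percentage per LV; any remainder stays unallocated
      lines = ["logicalVolumes:"]
      for i in range(1, osd_count + 1):
          lines += [
              f"  - name: meta_{i}",
              f"    size: {share}%VG",
              f"    vg: {vg_name}",
          ]
      return "\n".join(lines)

  print(logical_volumes_yaml(4))  # reproduces the four 25%VG volumes shown above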
Important
Plan LVs of a separate metadata device thoroughly. Any logical volume misconfiguration causes redeployment of all Ceph OSDs that use this disk as a metadata device.
Note
The general Ceph recommendation is to size the metadata device at 1% to 4% of the Ceph OSD data size. Mirantis highly recommends allocating at least 4% of the Ceph OSD data size.

If you plan to use a disk as a separate metadata device for 10 Ceph OSDs, define the size of the LV for each Ceph OSD as 1% to 4% of the corresponding Ceph OSD data size. If RADOS Gateway is enabled, the metadata size must be at least 4% of the Ceph OSD data size. For details, see Ceph documentation: Bluestore config reference.

For example, if the total data size of 10 Ceph OSDs equals 1 TB, with 100 GB per Ceph OSD, assign a metadata disk of at least 10 GB, with 1 GB per LV. The recommended size is 40 GB, with 4 GB per LV.
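If it helps to double-check the arithmetic, the following sketch reproduces the calculation above. It is an illustration only, not part of the Container Cloud tooling; the helper name metadata_lv_size_gb and the osd_count, osd_data_gb, and ratio values are assumptions taken from the example:

  # Illustrative sizing sketch only, reproducing the example above:
  # 10 Ceph OSDs with 100 GB of data each and the recommended 4% ratio.
  def metadata_lv_size_gb(osd_data_gb: float, ratio: float = 0.04) -> float:
      """Metadata LV size for one Ceph OSD at the given metadata-to-data ratio."""
      return osd_data_gb * ratio

  osd_count = 10      # Ceph OSDs sharing the metadata disk
  osd_data_gb = 100   # data size per Ceph OSD, in GB
  per_lv = metadata_lv_size_gb(osd_data_gb)  # 4.0 GB per LV
  total = per_lv * osd_count                 # 40.0 GB metadata disk in total
  print(f"{per_lv:.0f} GB per LV, {total:.0f} GB metadata disk in total")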
After applying BareMetalHostProfile, the bare metal provider creates the LVM partitioning for the metadata disk and exposes the logical volumes as /dev paths, for example, /dev/bluedb/meta_1 or /dev/bluedb/meta_3.

Example template of a host profile configuration for Ceph:
  spec:
    ...
    devices:
      ...
      - device:
          byName: /dev/sde
          wipe: true
      - device:
          byName: /dev/sdf
          wipe: true
      - device:
          byName: /dev/sdg
          wipe: true
      - device:
          byName: /dev/sdh
          wipe: true
      - device:
          byName: /dev/sdi
          wipe: true
        partitions:
          - name: ceph_meta
            size: 0
            wipe: true
    volumeGroups:
      ...
      - devices:
          - partition: ceph_meta
        name: bluedb
    logicalVolumes:
      ...
      - name: meta_1
        size: 25%VG
        vg: bluedb
      - name: meta_2
        size: 25%VG
        vg: bluedb
      - name: meta_3
        size: 25%VG
        vg: bluedb
      - name: meta_4
        size: 25%VG
        vg: bluedb
After applying such a BareMetalHostProfile object to a node, the nodes spec of the KaaSCephCluster object contains the following storageDevices section:

  spec:
    cephClusterSpec:
      ...
      nodes:
        ...
        machine-1:
          ...
          storageDevices:
            - name: sde
              config:
                metadataDevice: /dev/bluedb/meta_1
            - name: sdf
              config:
                metadataDevice: /dev/bluedb/meta_2
            - name: sdg
              config:
                metadataDevice: /dev/bluedb/meta_3
            - name: sdh
              config:
                metadataDevice: /dev/bluedb/meta_4