
Configure a separate metadata device for Ceph OSD

Available since MOSK 23.2

If you plan to use a separate metadata device for Ceph OSD, configure the spec.devices section of the BareMetalHostProfile object as described below.

Important

Mirantis highly recommends configuring disk partitions for Ceph OSD metadata using BareMetalHostProfile.

To configure a separate metadata device for Ceph OSD:

  1. Add the device to spec.devices with a single partition that will use the entire disk size.

    For example, if you plan to use four Ceph OSDs with a separate metadata device for each Ceph OSD, configure the spec.devices section as follows:

    spec:
      devices:
      ...
      - device:
          byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:5
          wipe: true
        partitions:
        - name: ceph_meta
          size: 0
          wipe: true
    
  2. Create a volume group (VG) on top of the defined partition, then create the required number of logical volumes (LVs) on top of that VG: one LV per Ceph OSD on the node.

    Example snippet of an LVM configuration for a Ceph metadata disk:

    spec:
      ...
      volumeGroups:
      ...
      - devices:
        - partition: ceph_meta
        name: bluedb
      logicalVolumes:
      ...
      - name: meta_1
        size: 25%VG
        vg: bluedb
      - name: meta_2
        size: 25%VG
        vg: bluedb
      - name: meta_3
        size: 25%VG
        vg: bluedb
      - name: meta_4
        size: 25%VG
        vg: bluedb
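
    The logicalVolumes pattern above is regular: one LV per Ceph OSD, each taking an equal share of the bluedb VG. As an illustration only (this helper is not part of any Mirantis tooling), the following Python sketch generates such a list for an arbitrary number of Ceph OSDs:

```python
def ceph_meta_lvs(osd_count, vg="bluedb"):
    """Build one metadata LV entry per Ceph OSD, splitting the VG evenly.

    Illustrative helper only; the names mirror the example profile above.
    """
    share = 100 // osd_count  # integer percentage of the VG per LV
    return [
        {"name": f"meta_{i}", "size": f"{share}%VG", "vg": vg}
        for i in range(1, osd_count + 1)
    ]

# Four Ceph OSDs reproduce the snippet above: meta_1..meta_4 at 25%VG each.
for lv in ceph_meta_lvs(4):
    print(f"- name: {lv['name']}\n  size: {lv['size']}\n  vg: {lv['vg']}")
```

    For five Ceph OSDs, the same sketch yields five LVs at 20%VG each.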
    

    Important

    Plan the LVs of a separate metadata device carefully. Any logical volume misconfiguration causes the redeployment of all Ceph OSDs that use this disk as a metadata device.

    Note

    The general Ceph recommendation is to size a metadata device between 1% and 4% of the Ceph OSD data size. Mirantis highly recommends allocating at least 4% of the Ceph OSD data size.

    If you plan to use a disk as a separate metadata device for 10 Ceph OSDs, define the size of the LV for each Ceph OSD as between 1% and 4% of the corresponding Ceph OSD data size. If RADOS Gateway is enabled, the metadata size must be at least 4% of the data size. For details, see Ceph documentation: BlueStore config reference.

    For example, if the total data size of 10 Ceph OSDs equals 1 TB with 100 GB per Ceph OSD, assign a metadata disk of no less than 10 GB, with 1 GB per LV. The recommended size is 40 GB, with 4 GB per LV.
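
    The sizing arithmetic above can be sketched as follows; this is an illustrative calculation, not a Mirantis tool, and assumes Ceph OSDs of equal data size:

```python
def meta_lv_size_gb(osd_data_gb, percent=4.0):
    """Metadata LV size as a percentage of the Ceph OSD data size.

    Ceph recommends 1% to 4%; Mirantis recommends the 4% upper bound.
    """
    if not 1.0 <= percent <= 4.0:
        raise ValueError("metadata size should be 1% to 4% of the data size")
    return osd_data_gb * percent / 100

# 10 Ceph OSDs of 100 GB each: 1 GB per LV at the 1% minimum and
# 4 GB per LV at the recommended 4%, so the metadata disk needs
# 10 GB at minimum and 40 GB at the recommended size.
per_lv_min = meta_lv_size_gb(100, percent=1)  # 1.0 GB
per_lv_rec = meta_lv_size_gb(100, percent=4)  # 4.0 GB
disk_min = 10 * per_lv_min                    # 10.0 GB
disk_rec = 10 * per_lv_rec                    # 40.0 GB
```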

    After you apply BareMetalHostProfile, the bare-metal provider creates the LVM partitioning for the metadata disk and exposes the logical volumes as /dev paths, for example, /dev/bluedb/meta_1 or /dev/bluedb/meta_3.

Example template of a host profile configuration for Ceph:

spec:
  ...
  devices:
  ...
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:1
      wipe: true
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:2
      wipe: true
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:3
      wipe: true
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:4
      wipe: true
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:5
      wipe: true
    partitions:
    - name: ceph_meta
      size: 0
      wipe: true
  volumeGroups:
  ...
  - devices:
    - partition: ceph_meta
    name: bluedb
  logicalVolumes:
  ...
  - name: meta_1
    size: 25%VG
    vg: bluedb
  - name: meta_2
    size: 25%VG
    vg: bluedb
  - name: meta_3
    size: 25%VG
    vg: bluedb
  - name: meta_4
    size: 25%VG
    vg: bluedb

After applying such a BareMetalHostProfile object to a node, the resulting nodes spec of the KaaSCephCluster object contains the following storageDevices section, with Ceph OSD data devices specified by the by-id full path:

spec:
  cephClusterSpec:
    ...
    nodes:
      ...
      machine-1:
        ...
        storageDevices:
        - fullPath: /dev/disk/by-id/scsi-SATA_ST4000NM002A-2HZ_WS20NNKC
          config:
            metadataDevice: /dev/bluedb/meta_1
        - fullPath: /dev/disk/by-id/ata-ST4000NM002A-2HZ101_WS20NEGE
          config:
            metadataDevice: /dev/bluedb/meta_2
        - fullPath: /dev/disk/by-id/scsi-0ATA_ST4000NM002A-2HZ_WS20LEL3
          config:
            metadataDevice: /dev/bluedb/meta_3
        - fullPath: /dev/disk/by-id/ata-HGST_HUS724040ALA640_PN1334PEDN9SSU
          config:
            metadataDevice: /dev/bluedb/meta_4
The same configuration with Ceph OSD data devices specified by device name:

spec:
  cephClusterSpec:
    ...
    nodes:
      ...
      machine-1:
        ...
        storageDevices:
        - name: sde
          config:
            metadataDevice: /dev/bluedb/meta_1
        - name: sdf
          config:
            metadataDevice: /dev/bluedb/meta_2
        - name: sdg
          config:
            metadataDevice: /dev/bluedb/meta_3
        - name: sdh
          config:
            metadataDevice: /dev/bluedb/meta_4
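
The mapping in both examples is mechanical: the n-th data device pairs with the n-th metadata LV of the VG. As an illustrative sketch only (the device names are hypothetical and depend on the node hardware), this pairing can be expressed as:

```python
def storage_devices(device_names, vg="bluedb"):
    """Pair each Ceph OSD data device with its own metadata LV, in order."""
    return [
        {"name": dev, "config": {"metadataDevice": f"/dev/{vg}/meta_{i}"}}
        for i, dev in enumerate(device_names, start=1)
    ]

# Hypothetical device names matching the example above.
devices = storage_devices(["sde", "sdf", "sdg", "sdh"])
```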