Configure Ceph disks in a host profile
This section describes how to configure devices for the Ceph cluster in the
BareMetalHostProfile object of a MOSK cluster.
To configure disks for a Ceph cluster:
1. Open the BareMetalHostProfile object of a MOSK cluster for editing.
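   For example, a minimal way to open the profile with kubectl, assuming access to the management cluster kubeconfig; the namespace and profile name below are placeholders:

   # Placeholders: replace <project-namespace> and <profile-name> with the
   # namespace and the BareMetalHostProfile name used by your MOSK cluster
   kubectl --kubeconfig <mgmt-kubeconfig> \
     -n <project-namespace> edit baremetalhostprofile <profile-name>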
2. In the spec.devices section, add each disk intended for use as a Ceph OSD data device with size: 0 and wipe: true.

   Example configuration for sde-sdh disks to use as Ceph OSDs:

   spec:
     devices:
     ...
     - device:
         byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:1
         size: 0
         wipe: true
     - device:
         byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:2
         size: 0
         wipe: true
     - device:
         byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:3
         size: 0
         wipe: true
     - device:
         byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:4
         size: 0
         wipe: true
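   If you need to identify which by-path link corresponds to which disk, you can list the links on the target node. A minimal sketch, assuming the sde-sdh disks from the example above:

   # Map /dev/disk/by-path identifiers to kernel disk names (sde-sdh)
   ls -l /dev/disk/by-path/ | grep -E 'sd[e-h]$'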
3. If you plan to use a separate metadata device for Ceph OSD, configure the spec.devices section as described below.

Important

Mirantis highly recommends configuring disk partitions for Ceph OSD metadata using BareMetalHostProfile.

Configuration of a separate metadata device for Ceph OSD
1. Add the device to spec.devices with a single partition that will use the entire disk size.

   For example, if you plan to use four Ceph OSDs with a separate metadata device for each Ceph OSD, configure the spec.devices section as follows:

   spec:
     devices:
     ...
     - device:
         byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:5
         wipe: true
       partitions:
       - name: ceph_meta
         size: 0
         wipe: true
2. Create a volume group on top of the defined partition and create the required number of logical volumes (LVs) on top of the created volume group (VG). Add one logical volume per Ceph OSD on the node.
Example snippet of an LVM configuration for a Ceph metadata disk:
spec:
  ...
  volumeGroups:
  ...
  - devices:
    - partition: ceph_meta
    name: bluedb
  logicalVolumes:
  ...
  - name: meta_1
    size: 25%VG
    vg: bluedb
  - name: meta_2
    size: 25%VG
    vg: bluedb
  - name: meta_3
    size: 25%VG
    vg: bluedb
  - name: meta_4
    size: 25%VG
    vg: bluedb
Important

Plan the LVs of a separate metadata device carefully. Any logical volume misconfiguration causes redeployment of all Ceph OSDs that use this disk as a metadata device.
Note

The general Ceph recommendation is to size the metadata device between 1% and 4% of the Ceph OSD data size. Mirantis highly recommends having at least 4% of the Ceph OSD data size.

If you plan to use a disk as a separate metadata device for 10 Ceph OSDs, define the size of the LV for each Ceph OSD as 1% to 4% of the corresponding Ceph OSD data size. If RADOS Gateway is enabled, the metadata size must be at least 4% of the data size. For details, see Ceph documentation: BlueStore config reference.
For example, if the total data size of 10 Ceph OSDs equals 1 TB with 100 GB each, assign a metadata disk of no less than 10 GB with 1 GB per LV. The recommended size is 40 GB with 4 GB per LV.

After applying BareMetalHostProfile, the bare metal provider creates the LVM partitioning for the metadata disk and places these volumes under /dev paths, for example, /dev/bluedb/meta_1 or /dev/bluedb/meta_3.

Example template of a host profile configuration for Ceph
spec:
  ...
  devices:
  ...
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:1
      wipe: true
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:2
      wipe: true
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:3
      wipe: true
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:4
      wipe: true
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:5
      wipe: true
    partitions:
    - name: ceph_meta
      size: 0
      wipe: true
  volumeGroups:
  ...
  - devices:
    - partition: ceph_meta
    name: bluedb
  logicalVolumes:
  ...
  - name: meta_1
    size: 25%VG
    vg: bluedb
  - name: meta_2
    size: 25%VG
    vg: bluedb
  - name: meta_3
    size: 25%VG
    vg: bluedb
  - name: meta_4
    size: 25%VG
    vg: bluedb
After applying such a BareMetalHostProfile to a node, the nodes spec of the KaaSCephCluster object contains the following storageDevices section:

spec:
  cephClusterSpec:
    ...
    nodes:
      ...
      machine-1:
        ...
        storageDevices:
        - fullPath: /dev/disk/by-id/scsi-SATA_ST4000NM002A-2HZ_WS20NNKC
          config:
            metadataDevice: /dev/bluedb/meta_1
        - fullPath: /dev/disk/by-id/ata-ST4000NM002A-2HZ101_WS20NEGE
          config:
            metadataDevice: /dev/bluedb/meta_2
        - fullPath: /dev/disk/by-id/scsi-0ATA_ST4000NM002A-2HZ_WS20LEL3
          config:
            metadataDevice: /dev/bluedb/meta_3
        - fullPath: /dev/disk/by-id/ata-HGST_HUS724040ALA640_PN1334PEDN9SSU
          config:
            metadataDevice: /dev/bluedb/meta_4
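Optionally, after the profile is applied and the node is provisioned, you can verify the resulting layout directly on the node. A minimal sketch, assuming SSH access to the node and the bluedb volume group from the examples above:

# List the logical volumes created in the bluedb volume group
sudo lvs bluedb

# Review the overall block device layout, including the Ceph OSD data disks
lsblk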